Oct 09 09:31:57 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 09 09:31:57 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 09 09:31:57 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 09:31:57 localhost kernel: BIOS-provided physical RAM map:
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 09 09:31:57 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Oct 09 09:31:57 localhost kernel: NX (Execute Disable) protection: active
Oct 09 09:31:57 localhost kernel: APIC: Static calls initialized
Oct 09 09:31:57 localhost kernel: SMBIOS 2.8 present.
Oct 09 09:31:57 localhost kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Oct 09 09:31:57 localhost kernel: Hypervisor detected: KVM
Oct 09 09:31:57 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 09 09:31:57 localhost kernel: kvm-clock: using sched offset of 1898363775067 cycles
Oct 09 09:31:57 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 09 09:31:57 localhost kernel: tsc: Detected 2445.406 MHz processor
Oct 09 09:31:57 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 09 09:31:57 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 09 09:31:57 localhost kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Oct 09 09:31:57 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 09 09:31:57 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 09 09:31:57 localhost kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct 09 09:31:57 localhost kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Oct 09 09:31:57 localhost kernel: Using GB pages for direct mapping
Oct 09 09:31:57 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 09 09:31:57 localhost kernel: ACPI: Early table checksum verification disabled
Oct 09 09:31:57 localhost kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Oct 09 09:31:57 localhost kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:57 localhost kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:57 localhost kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:57 localhost kernel: ACPI: FACS 0x000000007FFDFC80 000040
Oct 09 09:31:57 localhost kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:57 localhost kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:57 localhost kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:57 localhost kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Oct 09 09:31:57 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Oct 09 09:31:57 localhost kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Oct 09 09:31:57 localhost kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Oct 09 09:31:57 localhost kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Oct 09 09:31:57 localhost kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Oct 09 09:31:57 localhost kernel: No NUMA configuration found
Oct 09 09:31:57 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Oct 09 09:31:57 localhost kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Oct 09 09:31:57 localhost kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Oct 09 09:31:57 localhost kernel: Zone ranges:
Oct 09 09:31:57 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 09 09:31:57 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 09 09:31:57 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000027fffffff]
Oct 09 09:31:57 localhost kernel:   Device   empty
Oct 09 09:31:57 localhost kernel: Movable zone start for each node
Oct 09 09:31:57 localhost kernel: Early memory node ranges
Oct 09 09:31:57 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 09 09:31:57 localhost kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct 09 09:31:57 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000027fffffff]
Oct 09 09:31:57 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Oct 09 09:31:57 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 09 09:31:57 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 09 09:31:57 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 09 09:31:57 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 09 09:31:57 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 09 09:31:57 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 09 09:31:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 09 09:31:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 09 09:31:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 09 09:31:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 09 09:31:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 09 09:31:57 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 09 09:31:57 localhost kernel: TSC deadline timer available
Oct 09 09:31:57 localhost kernel: CPU topo: Max. logical packages:   4
Oct 09 09:31:57 localhost kernel: CPU topo: Max. logical dies:       4
Oct 09 09:31:57 localhost kernel: CPU topo: Max. dies per package:   1
Oct 09 09:31:57 localhost kernel: CPU topo: Max. threads per core:   1
Oct 09 09:31:57 localhost kernel: CPU topo: Num. cores per package:     1
Oct 09 09:31:57 localhost kernel: CPU topo: Num. threads per package:   1
Oct 09 09:31:57 localhost kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 09 09:31:57 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 09 09:31:57 localhost kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 09 09:31:57 localhost kernel: kvm-guest: setup PV sched yield
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 09 09:31:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 09 09:31:57 localhost kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 09 09:31:57 localhost kernel: Booting paravirtualized kernel on KVM
Oct 09 09:31:57 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 09 09:31:57 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 09 09:31:57 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Oct 09 09:31:57 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u524288 alloc=1*2097152
Oct 09 09:31:57 localhost kernel: pcpu-alloc: [0] 0 1 2 3 
Oct 09 09:31:57 localhost kernel: kvm-guest: PV spinlocks enabled
Oct 09 09:31:57 localhost kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 09 09:31:57 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 09:31:57 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 09 09:31:57 localhost kernel: random: crng init done
Oct 09 09:31:57 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 09 09:31:57 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 09 09:31:57 localhost kernel: Fallback order for Node 0: 0 
Oct 09 09:31:57 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 09 09:31:57 localhost kernel: Policy zone: Normal
Oct 09 09:31:57 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 09 09:31:57 localhost kernel: software IO TLB: area num 4.
Oct 09 09:31:57 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 09 09:31:57 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 09 09:31:57 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 09 09:31:57 localhost kernel: Dynamic Preempt: voluntary
Oct 09 09:31:57 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 09 09:31:57 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 09 09:31:57 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Oct 09 09:31:57 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 09 09:31:57 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 09 09:31:57 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 09 09:31:57 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 09 09:31:57 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 09 09:31:57 localhost kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 09 09:31:57 localhost kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 09 09:31:57 localhost kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 09 09:31:57 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Oct 09 09:31:57 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 09 09:31:57 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 09 09:31:57 localhost kernel: Console: colour VGA+ 80x25
Oct 09 09:31:57 localhost kernel: printk: console [ttyS0] enabled
Oct 09 09:31:57 localhost kernel: ACPI: Core revision 20230331
Oct 09 09:31:57 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 09 09:31:57 localhost kernel: x2apic enabled
Oct 09 09:31:57 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 09 09:31:57 localhost kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 09 09:31:57 localhost kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 09 09:31:57 localhost kernel: kvm-guest: setup PV IPIs
Oct 09 09:31:57 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 09 09:31:57 localhost kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct 09 09:31:57 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 09 09:31:57 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 09 09:31:57 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 09 09:31:57 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 09 09:31:57 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 09 09:31:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 09 09:31:57 localhost kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 09 09:31:57 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 09 09:31:57 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 09 09:31:57 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 09 09:31:57 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 09 09:31:57 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 09 09:31:57 localhost kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Oct 09 09:31:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 09 09:31:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 09 09:31:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 09 09:31:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct 09 09:31:57 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 09 09:31:57 localhost kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Oct 09 09:31:57 localhost kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Oct 09 09:31:57 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 09 09:31:57 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 09 09:31:57 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 09 09:31:57 localhost kernel: landlock: Up and running.
Oct 09 09:31:57 localhost kernel: Yama: becoming mindful.
Oct 09 09:31:57 localhost kernel: SELinux:  Initializing.
Oct 09 09:31:57 localhost kernel: LSM support for eBPF active
Oct 09 09:31:57 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 09 09:31:57 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 09 09:31:57 localhost kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Oct 09 09:31:57 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 09 09:31:57 localhost kernel: ... version:                0
Oct 09 09:31:57 localhost kernel: ... bit width:              48
Oct 09 09:31:57 localhost kernel: ... generic registers:      6
Oct 09 09:31:57 localhost kernel: ... value mask:             0000ffffffffffff
Oct 09 09:31:57 localhost kernel: ... max period:             00007fffffffffff
Oct 09 09:31:57 localhost kernel: ... fixed-purpose events:   0
Oct 09 09:31:57 localhost kernel: ... event mask:             000000000000003f
Oct 09 09:31:57 localhost kernel: signal: max sigframe size: 3376
Oct 09 09:31:57 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 09 09:31:57 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 09 09:31:57 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 09 09:31:57 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 09 09:31:57 localhost kernel: .... node  #0, CPUs:      #1 #2 #3
Oct 09 09:31:57 localhost kernel: smp: Brought up 1 node, 4 CPUs
Oct 09 09:31:57 localhost kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Oct 09 09:31:57 localhost kernel: node 0 deferred pages initialised in 18ms
Oct 09 09:31:57 localhost kernel: Memory: 7768032K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 615456K reserved, 0K cma-reserved)
Oct 09 09:31:57 localhost kernel: devtmpfs: initialized
Oct 09 09:31:57 localhost kernel: x86/mm: Memory block size: 128MB
Oct 09 09:31:57 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 09 09:31:57 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 09 09:31:57 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 09 09:31:57 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 09 09:31:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 09 09:31:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 09 09:31:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 09 09:31:57 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 09 09:31:57 localhost kernel: audit: type=2000 audit(1760002315.499:1): state=initialized audit_enabled=0 res=1
Oct 09 09:31:57 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 09 09:31:57 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 09 09:31:57 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 09 09:31:57 localhost kernel: cpuidle: using governor menu
Oct 09 09:31:57 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 09 09:31:57 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct 09 09:31:57 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 09 09:31:57 localhost kernel: PCI: Using configuration type 1 for base access
Oct 09 09:31:57 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 09 09:31:57 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 09 09:31:57 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 09 09:31:57 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 09 09:31:57 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 09 09:31:57 localhost kernel: Demotion targets for Node 0: null
Oct 09 09:31:57 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 09 09:31:57 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 09 09:31:57 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 09 09:31:57 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 09 09:31:57 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 09 09:31:57 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 09 09:31:57 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 09 09:31:57 localhost kernel: ACPI: Interpreter enabled
Oct 09 09:31:57 localhost kernel: ACPI: PM: (supports S0 S5)
Oct 09 09:31:57 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 09 09:31:57 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 09 09:31:57 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 09 09:31:57 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 09 09:31:57 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 09 09:31:57 localhost kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 09 09:31:57 localhost kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Oct 09 09:31:57 localhost kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Oct 09 09:31:57 localhost kernel: PCI host bridge to bus 0000:00
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 09 09:31:57 localhost kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Oct 09 09:31:57 localhost kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:02: extended config space not accessible
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [1] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [2] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [3] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [4] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [5] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [6] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [7] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [8] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [9] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [10] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [11] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [12] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [13] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [14] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [15] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [16] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [17] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [18] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [19] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [20] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [21] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [22] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [23] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [24] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [25] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [26] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [27] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [28] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [29] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [30] registered
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [31] registered
Oct 09 09:31:57 localhost kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-2] registered
Oct 09 09:31:57 localhost kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Oct 09 09:31:57 localhost kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-3] registered
Oct 09 09:31:57 localhost kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Oct 09 09:31:57 localhost kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-4] registered
Oct 09 09:31:57 localhost kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-5] registered
Oct 09 09:31:57 localhost kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-6] registered
Oct 09 09:31:57 localhost kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct 09 09:31:57 localhost kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]
Oct 09 09:31:57 localhost kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-7] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-8] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-9] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-10] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-11] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-12] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-13] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-14] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-15] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-16] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct 09 09:31:57 localhost kernel: acpiphp: Slot [0-17] registered
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 09 09:31:57 localhost kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 09 09:31:57 localhost kernel: iommu: Default domain type: Translated
Oct 09 09:31:57 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 09 09:31:57 localhost kernel: SCSI subsystem initialized
Oct 09 09:31:57 localhost kernel: ACPI: bus type USB registered
Oct 09 09:31:57 localhost kernel: usbcore: registered new interface driver usbfs
Oct 09 09:31:57 localhost kernel: usbcore: registered new interface driver hub
Oct 09 09:31:57 localhost kernel: usbcore: registered new device driver usb
Oct 09 09:31:57 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 09 09:31:57 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 09 09:31:57 localhost kernel: PTP clock support registered
Oct 09 09:31:57 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 09 09:31:57 localhost kernel: NetLabel: Initializing
Oct 09 09:31:57 localhost kernel: NetLabel:  domain hash size = 128
Oct 09 09:31:57 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 09 09:31:57 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 09 09:31:57 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 09 09:31:57 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 09 09:31:57 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 09 09:31:57 localhost kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 09 09:31:57 localhost kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 09 09:31:57 localhost kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 09 09:31:57 localhost kernel: vgaarb: loaded
Oct 09 09:31:57 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 09 09:31:57 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 09 09:31:57 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 09 09:31:57 localhost kernel: pnp: PnP ACPI init
Oct 09 09:31:57 localhost kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 09 09:31:57 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 09 09:31:57 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 09 09:31:57 localhost kernel: NET: Registered PF_INET protocol family
Oct 09 09:31:57 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 09 09:31:57 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 09 09:31:57 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 09 09:31:57 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 09 09:31:57 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 09 09:31:57 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 09 09:31:57 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 09 09:31:57 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 09 09:31:57 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 09 09:31:57 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 09 09:31:57 localhost kernel: NET: Registered PF_XDP protocol family
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct 09 09:31:57 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Oct 09 09:31:57 localhost kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct 09 09:31:57 localhost kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 09 09:31:57 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 09 09:31:57 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 09 09:31:57 localhost kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Oct 09 09:31:57 localhost kernel: ACPI: bus type thunderbolt registered
Oct 09 09:31:57 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 09 09:31:57 localhost kernel: Initialise system trusted keyrings
Oct 09 09:31:57 localhost kernel: Key type blacklist registered
Oct 09 09:31:57 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 09 09:31:57 localhost kernel: zbud: loaded
Oct 09 09:31:57 localhost kernel: integrity: Platform Keyring initialized
Oct 09 09:31:57 localhost kernel: integrity: Machine keyring initialized
Oct 09 09:31:57 localhost kernel: Freeing initrd memory: 86104K
Oct 09 09:31:57 localhost kernel: NET: Registered PF_ALG protocol family
Oct 09 09:31:57 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 09 09:31:57 localhost kernel: Key type asymmetric registered
Oct 09 09:31:57 localhost kernel: Asymmetric key parser 'x509' registered
Oct 09 09:31:57 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 09 09:31:57 localhost kernel: io scheduler mq-deadline registered
Oct 09 09:31:57 localhost kernel: io scheduler kyber registered
Oct 09 09:31:57 localhost kernel: io scheduler bfq registered
Oct 09 09:31:57 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct 09 09:31:57 localhost kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Oct 09 09:31:57 localhost kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Oct 09 09:31:57 localhost kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Oct 09 09:31:57 localhost kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Oct 09 09:31:57 localhost kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Oct 09 09:31:57 localhost kernel: shpchp 0000:01:00.0: Slot initialization failed
Oct 09 09:31:57 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 09 09:31:57 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 09 09:31:57 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 09 09:31:57 localhost kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct 09 09:31:57 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 09 09:31:57 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 09 09:31:57 localhost kernel: Non-volatile memory driver v1.3
Oct 09 09:31:57 localhost kernel: rdac: device handler registered
Oct 09 09:31:57 localhost kernel: hp_sw: device handler registered
Oct 09 09:31:57 localhost kernel: emc: device handler registered
Oct 09 09:31:57 localhost kernel: alua: device handler registered
Oct 09 09:31:57 localhost kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Oct 09 09:31:57 localhost kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Oct 09 09:31:57 localhost kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Oct 09 09:31:57 localhost kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Oct 09 09:31:57 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 09 09:31:57 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 09 09:31:57 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 09 09:31:57 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 09 09:31:57 localhost kernel: usb usb1: SerialNumber: 0000:02:01.0
Oct 09 09:31:57 localhost kernel: hub 1-0:1.0: USB hub found
Oct 09 09:31:57 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 09 09:31:57 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 09 09:31:57 localhost kernel: usbserial: USB Serial support registered for generic
Oct 09 09:31:57 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 09 09:31:57 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 09 09:31:57 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 09 09:31:57 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 09 09:31:57 localhost kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 09 09:31:57 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 09 09:31:57 localhost kernel: rtc_cmos 00:03: registered as rtc0
Oct 09 09:31:57 localhost kernel: rtc_cmos 00:03: setting system clock to 2025-10-09T09:31:57 UTC (1760002317)
Oct 09 09:31:57 localhost kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 09 09:31:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 09 09:31:57 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 09 09:31:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 09 09:31:57 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 09 09:31:57 localhost kernel: usbcore: registered new interface driver usbhid
Oct 09 09:31:57 localhost kernel: usbhid: USB HID core driver
Oct 09 09:31:57 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 09 09:31:57 localhost kernel: Initializing XFRM netlink socket
Oct 09 09:31:57 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 09 09:31:57 localhost kernel: Segment Routing with IPv6
Oct 09 09:31:57 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 09 09:31:57 localhost kernel: mpls_gso: MPLS GSO support
Oct 09 09:31:57 localhost kernel: IPI shorthand broadcast: enabled
Oct 09 09:31:57 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 09 09:31:57 localhost kernel: AES CTR mode by8 optimization enabled
Oct 09 09:31:57 localhost kernel: sched_clock: Marking stable (1135001778, 143075661)->(1389961781, -111884342)
Oct 09 09:31:57 localhost kernel: registered taskstats version 1
Oct 09 09:31:57 localhost kernel: Loading compiled-in X.509 certificates
Oct 09 09:31:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 09 09:31:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 09 09:31:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 09 09:31:57 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 09 09:31:57 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 09 09:31:57 localhost kernel: Demotion targets for Node 0: null
Oct 09 09:31:57 localhost kernel: page_owner is disabled
Oct 09 09:31:57 localhost kernel: Key type .fscrypt registered
Oct 09 09:31:57 localhost kernel: Key type fscrypt-provisioning registered
Oct 09 09:31:57 localhost kernel: Key type big_key registered
Oct 09 09:31:57 localhost kernel: Key type encrypted registered
Oct 09 09:31:57 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 09 09:31:57 localhost kernel: Loading compiled-in module X.509 certificates
Oct 09 09:31:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 09 09:31:57 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 09 09:31:57 localhost kernel: ima: No architecture policies found
Oct 09 09:31:57 localhost kernel: evm: Initialising EVM extended attributes:
Oct 09 09:31:57 localhost kernel: evm: security.selinux
Oct 09 09:31:57 localhost kernel: evm: security.SMACK64 (disabled)
Oct 09 09:31:57 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 09 09:31:57 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 09 09:31:57 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 09 09:31:57 localhost kernel: evm: security.apparmor (disabled)
Oct 09 09:31:57 localhost kernel: evm: security.ima
Oct 09 09:31:57 localhost kernel: evm: security.capability
Oct 09 09:31:57 localhost kernel: evm: HMAC attrs: 0x1
Oct 09 09:31:57 localhost kernel: Running certificate verification RSA selftest
Oct 09 09:31:57 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 09 09:31:57 localhost kernel: Running certificate verification ECDSA selftest
Oct 09 09:31:57 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 09 09:31:57 localhost kernel: clk: Disabling unused clocks
Oct 09 09:31:57 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 09 09:31:57 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 09 09:31:57 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 09 09:31:57 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 09 09:31:57 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 09 09:31:57 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 09 09:31:57 localhost kernel: Run /init as init process
Oct 09 09:31:57 localhost kernel:   with arguments:
Oct 09 09:31:57 localhost kernel:     /init
Oct 09 09:31:57 localhost kernel:   with environment:
Oct 09 09:31:57 localhost kernel:     HOME=/
Oct 09 09:31:57 localhost kernel:     TERM=linux
Oct 09 09:31:57 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 09 09:31:57 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 09 09:31:57 localhost systemd[1]: Detected virtualization kvm.
Oct 09 09:31:57 localhost systemd[1]: Detected architecture x86-64.
Oct 09 09:31:57 localhost systemd[1]: Running in initrd.
Oct 09 09:31:57 localhost systemd[1]: No hostname configured, using default hostname.
Oct 09 09:31:57 localhost systemd[1]: Hostname set to <localhost>.
Oct 09 09:31:57 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 09 09:31:57 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 09 09:31:57 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 09 09:31:57 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 09 09:31:57 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 09 09:31:57 localhost systemd[1]: Reached target Local File Systems.
Oct 09 09:31:57 localhost systemd[1]: Reached target Path Units.
Oct 09 09:31:57 localhost systemd[1]: Reached target Slice Units.
Oct 09 09:31:57 localhost systemd[1]: Reached target Swaps.
Oct 09 09:31:57 localhost systemd[1]: Reached target Timer Units.
Oct 09 09:31:57 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 09 09:31:57 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 09 09:31:57 localhost systemd[1]: Listening on Journal Socket.
Oct 09 09:31:57 localhost systemd[1]: Listening on udev Control Socket.
Oct 09 09:31:57 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 09 09:31:57 localhost systemd[1]: Reached target Socket Units.
Oct 09 09:31:57 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 09 09:31:57 localhost systemd[1]: Starting Journal Service...
Oct 09 09:31:57 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 09 09:31:57 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 09 09:31:57 localhost systemd[1]: Starting Create System Users...
Oct 09 09:31:57 localhost systemd[1]: Starting Setup Virtual Console...
Oct 09 09:31:57 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 09 09:31:57 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 09 09:31:57 localhost systemd[1]: Finished Create System Users.
Oct 09 09:31:57 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 09 09:31:57 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 09 09:31:57 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 09 09:31:57 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 09 09:31:57 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Oct 09 09:31:57 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 09 09:31:57 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Oct 09 09:31:57 localhost systemd-journald[282]: Journal started
Oct 09 09:31:57 localhost systemd-journald[282]: Runtime Journal (/run/log/journal/99ca1aa4a8fe49f8801977dd20980206) is 8.0M, max 153.6M, 145.6M free.
Oct 09 09:31:57 localhost systemd-sysusers[285]: Creating group 'users' with GID 100.
Oct 09 09:31:57 localhost systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Oct 09 09:31:57 localhost systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 09 09:31:57 localhost systemd[1]: Started Journal Service.
Oct 09 09:31:57 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 09 09:31:57 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 09 09:31:57 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 09 09:31:58 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 09 09:31:58 localhost systemd[1]: Finished Setup Virtual Console.
Oct 09 09:31:58 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 09 09:31:58 localhost systemd[1]: Starting dracut cmdline hook...
Oct 09 09:31:58 localhost dracut-cmdline[301]: dracut-9 dracut-057-102.git20250818.el9
Oct 09 09:31:58 localhost dracut-cmdline[301]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 09:31:58 localhost systemd[1]: Finished dracut cmdline hook.
Oct 09 09:31:58 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 09 09:31:58 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 09 09:31:58 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 09 09:31:58 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 09 09:31:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 09 09:31:58 localhost kernel: RPC: Registered udp transport module.
Oct 09 09:31:58 localhost kernel: RPC: Registered tcp transport module.
Oct 09 09:31:58 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 09 09:31:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 09 09:31:58 localhost rpc.statd[416]: Version 2.5.4 starting
Oct 09 09:31:58 localhost rpc.statd[416]: Initializing NSM state
Oct 09 09:31:58 localhost rpc.idmapd[421]: Setting log level to 0
Oct 09 09:31:58 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 09 09:31:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 09 09:31:58 localhost systemd-udevd[434]: Using default interface naming scheme 'rhel-9.0'.
Oct 09 09:31:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 09 09:31:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 09 09:31:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 09 09:31:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 09 09:31:58 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 09 09:31:58 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 09 09:31:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 09 09:31:58 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 09 09:31:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 09 09:31:58 localhost systemd[1]: Reached target Network.
Oct 09 09:31:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 09 09:31:58 localhost systemd[1]: Starting dracut initqueue hook...
Oct 09 09:31:58 localhost kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Oct 09 09:31:58 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 09 09:31:58 localhost kernel:  vda: vda1
Oct 09 09:31:58 localhost systemd-udevd[449]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:31:58 localhost systemd-udevd[451]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:31:58 localhost kernel: libata version 3.00 loaded.
Oct 09 09:31:58 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 09 09:31:58 localhost kernel: ahci 0000:00:1f.2: version 3.0
Oct 09 09:31:58 localhost kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 09 09:31:58 localhost kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 09 09:31:58 localhost kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 09 09:31:58 localhost kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Oct 09 09:31:58 localhost kernel: scsi host0: ahci
Oct 09 09:31:58 localhost kernel: scsi host1: ahci
Oct 09 09:31:58 localhost kernel: scsi host2: ahci
Oct 09 09:31:58 localhost kernel: scsi host3: ahci
Oct 09 09:31:58 localhost kernel: scsi host4: ahci
Oct 09 09:31:58 localhost kernel: scsi host5: ahci
Oct 09 09:31:58 localhost kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 52 lpm-pol 0
Oct 09 09:31:58 localhost kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 52 lpm-pol 0
Oct 09 09:31:58 localhost kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 52 lpm-pol 0
Oct 09 09:31:58 localhost kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 52 lpm-pol 0
Oct 09 09:31:58 localhost kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 52 lpm-pol 0
Oct 09 09:31:58 localhost kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 52 lpm-pol 0
Oct 09 09:31:58 localhost systemd[1]: Reached target Initrd Root Device.
Oct 09 09:31:58 localhost kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:58 localhost kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 09 09:31:58 localhost kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:58 localhost kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:58 localhost kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:58 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 09 09:31:58 localhost kernel: ata1.00: applying bridge limits
Oct 09 09:31:58 localhost kernel: ata1.00: configured for UDMA/100
Oct 09 09:31:58 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 09 09:31:58 localhost kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:58 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 09 09:31:58 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 09 09:31:58 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 09 09:31:58 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 09 09:31:58 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 09 09:31:58 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 09 09:31:58 localhost systemd[1]: Reached target System Initialization.
Oct 09 09:31:58 localhost systemd[1]: Reached target Basic System.
Oct 09 09:31:58 localhost systemd[1]: Finished dracut initqueue hook.
Oct 09 09:31:58 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 09 09:31:58 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 09 09:31:58 localhost systemd[1]: Reached target Remote File Systems.
Oct 09 09:31:58 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 09 09:31:58 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 09 09:31:58 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 09 09:31:59 localhost systemd-fsck[527]: /usr/sbin/fsck.xfs: XFS file system.
Oct 09 09:31:59 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 09 09:31:59 localhost systemd[1]: Mounting /sysroot...
Oct 09 09:31:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 09 09:31:59 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 09 09:31:59 localhost kernel: XFS (vda1): Ending clean mount
Oct 09 09:31:59 localhost systemd[1]: Mounted /sysroot.
Oct 09 09:31:59 localhost systemd[1]: Reached target Initrd Root File System.
Oct 09 09:31:59 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 09 09:31:59 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 09 09:31:59 localhost systemd[1]: Reached target Initrd File Systems.
Oct 09 09:31:59 localhost systemd[1]: Reached target Initrd Default Target.
Oct 09 09:31:59 localhost systemd[1]: Starting dracut mount hook...
Oct 09 09:31:59 localhost systemd[1]: Finished dracut mount hook.
Oct 09 09:31:59 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 09 09:31:59 localhost rpc.idmapd[421]: exiting on signal 15
Oct 09 09:31:59 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 09 09:31:59 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 09 09:31:59 localhost systemd[1]: Stopped target Network.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Timer Units.
Oct 09 09:31:59 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 09 09:31:59 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Basic System.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Path Units.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Remote File Systems.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Slice Units.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Socket Units.
Oct 09 09:31:59 localhost systemd[1]: Stopped target System Initialization.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Local File Systems.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Swaps.
Oct 09 09:31:59 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped dracut mount hook.
Oct 09 09:31:59 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 09 09:31:59 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 09 09:31:59 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 09 09:31:59 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 09 09:31:59 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 09 09:31:59 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 09 09:31:59 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 09 09:31:59 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 09 09:31:59 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 09 09:31:59 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 09 09:31:59 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 09 09:31:59 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 09 09:31:59 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Closed udev Control Socket.
Oct 09 09:31:59 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Closed udev Kernel Socket.
Oct 09 09:31:59 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 09 09:31:59 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 09 09:31:59 localhost systemd[1]: Starting Cleanup udev Database...
Oct 09 09:31:59 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 09 09:31:59 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 09 09:31:59 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Stopped Create System Users.
Oct 09 09:31:59 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 09 09:31:59 localhost systemd[1]: Finished Cleanup udev Database.
Oct 09 09:31:59 localhost systemd[1]: Reached target Switch Root.
Oct 09 09:31:59 localhost systemd[1]: Starting Switch Root...
Oct 09 09:31:59 localhost systemd[1]: Switching root.
Oct 09 09:31:59 localhost systemd-journald[282]: Received SIGTERM from PID 1 (systemd).
Oct 09 09:31:59 localhost systemd-journald[282]: Journal stopped
Oct 09 09:32:00 compute-1 kernel: audit: type=1404 audit(1760002319.671:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 09 09:32:00 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 09:32:00 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 09 09:32:00 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 09:32:00 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 09 09:32:00 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 09:32:00 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 09:32:00 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 09:32:00 compute-1 kernel: audit: type=1403 audit(1760002319.779:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 09 09:32:00 compute-1 systemd[1]: Successfully loaded SELinux policy in 110.681ms.
Oct 09 09:32:00 compute-1 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.967ms.
Oct 09 09:32:00 compute-1 systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 09 09:32:00 compute-1 systemd[1]: Detected virtualization kvm.
Oct 09 09:32:00 compute-1 systemd[1]: Detected architecture x86-64.
Oct 09 09:32:00 compute-1 systemd[1]: Hostname set to <compute-1>.
Oct 09 09:32:00 compute-1 systemd-sysv-generator[612]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:32:00 compute-1 systemd-rc-local-generator[609]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:32:00 compute-1 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 09 09:32:00 compute-1 systemd[1]: Stopped Switch Root.
Oct 09 09:32:00 compute-1 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 09 09:32:00 compute-1 systemd[1]: Created slice Slice /system/getty.
Oct 09 09:32:00 compute-1 systemd[1]: Created slice Slice /system/serial-getty.
Oct 09 09:32:00 compute-1 systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 09 09:32:00 compute-1 systemd[1]: Created slice User and Session Slice.
Oct 09 09:32:00 compute-1 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 09 09:32:00 compute-1 systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 09 09:32:00 compute-1 systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 09 09:32:00 compute-1 systemd[1]: Reached target Local Encrypted Volumes.
Oct 09 09:32:00 compute-1 systemd[1]: Stopped target Switch Root.
Oct 09 09:32:00 compute-1 systemd[1]: Stopped target Initrd File Systems.
Oct 09 09:32:00 compute-1 systemd[1]: Stopped target Initrd Root File System.
Oct 09 09:32:00 compute-1 systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 09 09:32:00 compute-1 systemd[1]: Reached target Path Units.
Oct 09 09:32:00 compute-1 systemd[1]: Reached target rpc_pipefs.target.
Oct 09 09:32:00 compute-1 systemd[1]: Reached target Slice Units.
Oct 09 09:32:00 compute-1 systemd[1]: Reached target Local Verity Protected Volumes.
Oct 09 09:32:00 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 09 09:32:00 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 09 09:32:00 compute-1 systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 09 09:32:00 compute-1 systemd[1]: Reached target RPC Port Mapper.
Oct 09 09:32:00 compute-1 systemd[1]: Listening on Process Core Dump Socket.
Oct 09 09:32:00 compute-1 systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 09 09:32:00 compute-1 systemd[1]: Listening on udev Control Socket.
Oct 09 09:32:00 compute-1 systemd[1]: Listening on udev Kernel Socket.
Oct 09 09:32:00 compute-1 systemd[1]: Mounting Huge Pages File System...
Oct 09 09:32:00 compute-1 systemd[1]: Mounting /dev/hugepages1G...
Oct 09 09:32:00 compute-1 systemd[1]: Mounting /dev/hugepages2M...
Oct 09 09:32:00 compute-1 systemd[1]: Mounting POSIX Message Queue File System...
Oct 09 09:32:00 compute-1 systemd[1]: Mounting Kernel Debug File System...
Oct 09 09:32:00 compute-1 systemd[1]: Mounting Kernel Trace File System...
Oct 09 09:32:00 compute-1 systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 09:32:00 compute-1 systemd[1]: Starting Create List of Static Device Nodes...
Oct 09 09:32:00 compute-1 systemd[1]: Load legacy module configuration was skipped because no trigger condition checks were met.
Oct 09 09:32:00 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 09 09:32:00 compute-1 systemd[1]: Starting Load Kernel Module configfs...
Oct 09 09:32:00 compute-1 systemd[1]: Starting Load Kernel Module drm...
Oct 09 09:32:00 compute-1 systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 09 09:32:00 compute-1 systemd[1]: Starting Load Kernel Module fuse...
Oct 09 09:32:00 compute-1 systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 09 09:32:00 compute-1 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 09 09:32:00 compute-1 systemd[1]: Stopped File System Check on Root Device.
Oct 09 09:32:00 compute-1 systemd[1]: Stopped Journal Service.
Oct 09 09:32:00 compute-1 kernel: fuse: init (API version 7.37)
Oct 09 09:32:00 compute-1 systemd[1]: Starting Journal Service...
Oct 09 09:32:00 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct 09 09:32:00 compute-1 systemd[1]: Starting Generate network units from Kernel command line...
Oct 09 09:32:00 compute-1 systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 09:32:00 compute-1 systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 09 09:32:00 compute-1 systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 09 09:32:00 compute-1 systemd[1]: Starting Coldplug All udev Devices...
Oct 09 09:32:00 compute-1 kernel: ACPI: bus type drm_connector registered
Oct 09 09:32:00 compute-1 systemd[1]: Mounted Huge Pages File System.
Oct 09 09:32:00 compute-1 systemd[1]: Mounted /dev/hugepages1G.
Oct 09 09:32:00 compute-1 systemd-journald[658]: Journal started
Oct 09 09:32:00 compute-1 systemd-journald[658]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct 09 09:32:00 compute-1 systemd[1]: Queued start job for default target Multi-User System.
Oct 09 09:32:00 compute-1 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 09 09:32:00 compute-1 systemd[1]: Started Journal Service.
Oct 09 09:32:00 compute-1 systemd[1]: Mounted /dev/hugepages2M.
Oct 09 09:32:00 compute-1 systemd[1]: Mounted POSIX Message Queue File System.
Oct 09 09:32:00 compute-1 systemd[1]: Mounted Kernel Debug File System.
Oct 09 09:32:00 compute-1 systemd[1]: Mounted Kernel Trace File System.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Create List of Static Device Nodes.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 09 09:32:00 compute-1 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module configfs.
Oct 09 09:32:00 compute-1 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 09 09:32:00 compute-1 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module drm.
Oct 09 09:32:00 compute-1 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 09 09:32:00 compute-1 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module fuse.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Generate network units from Kernel command line.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 09 09:32:00 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 09 09:32:00 compute-1 systemd-modules-load[659]: Inserted module 'br_netfilter'
Oct 09 09:32:00 compute-1 kernel: Bridge firewalling registered
Oct 09 09:32:00 compute-1 systemd[1]: Activating swap /swap...
Oct 09 09:32:00 compute-1 systemd[1]: Mounting FUSE Control File System...
Oct 09 09:32:00 compute-1 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 09 09:32:00 compute-1 systemd[1]: Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc).
Oct 09 09:32:00 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 09 09:32:00 compute-1 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 09 09:32:00 compute-1 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 09 09:32:00 compute-1 systemd[1]: Starting Load/Save OS Random Seed...
Oct 09 09:32:00 compute-1 systemd[1]: Create System Users was skipped because no trigger condition checks were met.
Oct 09 09:32:00 compute-1 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 09 09:32:00 compute-1 systemd[1]: Activated swap /swap.
Oct 09 09:32:00 compute-1 systemd-journald[658]: Time spent on flushing to /var/log/journal/42833e1b511a402df82cb9cb2fc36491 is 7.586ms for 1155 entries.
Oct 09 09:32:00 compute-1 systemd-journald[658]: System Journal (/var/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 4.0G, 3.9G free.
Oct 09 09:32:00 compute-1 systemd-journald[658]: Received client request to flush runtime journal.
Oct 09 09:32:00 compute-1 systemd[1]: Mounted FUSE Control File System.
Oct 09 09:32:00 compute-1 systemd[1]: Reached target Swaps.
Oct 09 09:32:00 compute-1 systemd-modules-load[659]: Inserted module 'nf_conntrack'
Oct 09 09:32:00 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct 09 09:32:00 compute-1 systemd[1]: Starting Apply Kernel Variables...
Oct 09 09:32:00 compute-1 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Load/Save OS Random Seed.
Oct 09 09:32:00 compute-1 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 09 09:32:00 compute-1 systemd[1]: Finished Apply Kernel Variables.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 09 09:32:00 compute-1 systemd[1]: Reached target Preparation for Local File Systems.
Oct 09 09:32:00 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 09 09:32:00 compute-1 systemd[1]: Reached target Local File Systems.
Oct 09 09:32:00 compute-1 systemd[1]: Starting Import network configuration from initramfs...
Oct 09 09:32:00 compute-1 systemd[1]: Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met.
Oct 09 09:32:00 compute-1 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 09 09:32:00 compute-1 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 09 09:32:00 compute-1 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 09 09:32:00 compute-1 systemd[1]: Starting Automatic Boot Loader Update...
Oct 09 09:32:00 compute-1 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 09 09:32:00 compute-1 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 09 09:32:00 compute-1 systemd[1]: Finished Coldplug All udev Devices.
Oct 09 09:32:00 compute-1 bootctl[676]: Couldn't find EFI system partition, skipping.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Automatic Boot Loader Update.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Import network configuration from initramfs.
Oct 09 09:32:00 compute-1 systemd[1]: Starting Create Volatile Files and Directories...
Oct 09 09:32:00 compute-1 systemd-udevd[678]: Using default interface naming scheme 'rhel-9.0'.
Oct 09 09:32:00 compute-1 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 09 09:32:00 compute-1 systemd[1]: Starting Load Kernel Module configfs...
Oct 09 09:32:00 compute-1 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module configfs.
Oct 09 09:32:00 compute-1 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 09 09:32:00 compute-1 systemd-udevd[718]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:32:00 compute-1 systemd[1]: Finished Create Volatile Files and Directories.
Oct 09 09:32:00 compute-1 systemd[1]: Starting Security Auditing Service...
Oct 09 09:32:00 compute-1 systemd[1]: Starting RPC Bind...
Oct 09 09:32:00 compute-1 systemd[1]: Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
Oct 09 09:32:00 compute-1 systemd[1]: Update is Completed was skipped because no trigger condition checks were met.
Oct 09 09:32:00 compute-1 auditd[730]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 09 09:32:00 compute-1 auditd[730]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 09 09:32:00 compute-1 systemd[1]: Started RPC Bind.
Oct 09 09:32:00 compute-1 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Oct 09 09:32:00 compute-1 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 09 09:32:00 compute-1 augenrules[735]: /sbin/augenrules: No change
Oct 09 09:32:00 compute-1 augenrules[754]: No rules
Oct 09 09:32:00 compute-1 augenrules[754]: enabled 1
Oct 09 09:32:00 compute-1 augenrules[754]: failure 1
Oct 09 09:32:00 compute-1 augenrules[754]: pid 730
Oct 09 09:32:00 compute-1 augenrules[754]: rate_limit 0
Oct 09 09:32:00 compute-1 augenrules[754]: backlog_limit 8192
Oct 09 09:32:00 compute-1 augenrules[754]: lost 0
Oct 09 09:32:00 compute-1 augenrules[754]: backlog 4
Oct 09 09:32:00 compute-1 augenrules[754]: backlog_wait_time 60000
Oct 09 09:32:00 compute-1 augenrules[754]: backlog_wait_time_actual 0
Oct 09 09:32:00 compute-1 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 09 09:32:00 compute-1 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 09 09:32:00 compute-1 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 09 09:32:00 compute-1 systemd[1]: Started Security Auditing Service.
Oct 09 09:32:00 compute-1 systemd-udevd[707]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:32:00 compute-1 kernel: iTCO_vendor_support: vendor-support=0
Oct 09 09:32:00 compute-1 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 09 09:32:00 compute-1 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Oct 09 09:32:00 compute-1 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Oct 09 09:32:00 compute-1 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 09 09:32:00 compute-1 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct 09 09:32:00 compute-1 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct 09 09:32:00 compute-1 kernel: Console: switching to colour dummy device 80x25
Oct 09 09:32:00 compute-1 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 09 09:32:00 compute-1 kernel: [drm] features: -context_init
Oct 09 09:32:00 compute-1 kernel: [drm] number of scanouts: 1
Oct 09 09:32:00 compute-1 kernel: [drm] number of cap sets: 0
Oct 09 09:32:00 compute-1 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Oct 09 09:32:00 compute-1 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 09 09:32:00 compute-1 kernel: Console: switching to colour frame buffer device 160x50
Oct 09 09:32:00 compute-1 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 09 09:32:00 compute-1 kernel: kvm_amd: TSC scaling supported
Oct 09 09:32:00 compute-1 kernel: kvm_amd: Nested Virtualization enabled
Oct 09 09:32:00 compute-1 kernel: kvm_amd: Nested Paging enabled
Oct 09 09:32:00 compute-1 kernel: kvm_amd: LBR virtualization supported
Oct 09 09:32:00 compute-1 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 09 09:32:00 compute-1 kernel: kvm_amd: Virtual GIF supported
Oct 09 09:32:01 compute-1 systemd[1]: Reached target System Initialization.
Oct 09 09:32:01 compute-1 systemd[1]: Started dnf makecache --timer.
Oct 09 09:32:01 compute-1 systemd[1]: Started Daily rotation of log files.
Oct 09 09:32:01 compute-1 systemd[1]: Started Run system activity accounting tool every 10 minutes.
Oct 09 09:32:01 compute-1 systemd[1]: Started Generate summary of yesterday's process accounting.
Oct 09 09:32:01 compute-1 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 09 09:32:01 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 09 09:32:01 compute-1 systemd[1]: Reached target Timer Units.
Oct 09 09:32:01 compute-1 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 09 09:32:01 compute-1 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 09 09:32:01 compute-1 systemd[1]: Reached target Socket Units.
Oct 09 09:32:01 compute-1 systemd[1]: Starting D-Bus System Message Bus...
Oct 09 09:32:01 compute-1 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 09:32:01 compute-1 systemd[1]: Started D-Bus System Message Bus.
Oct 09 09:32:01 compute-1 systemd[1]: Reached target Basic System.
Oct 09 09:32:01 compute-1 dbus-broker-lau[789]: Ready
Oct 09 09:32:01 compute-1 systemd[1]: Starting NTP client/server...
Oct 09 09:32:01 compute-1 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 09 09:32:01 compute-1 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 09 09:32:01 compute-1 systemd[1]: Started irqbalance daemon.
Oct 09 09:32:01 compute-1 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 09 09:32:01 compute-1 systemd[1]: Starting Create netns directory...
Oct 09 09:32:01 compute-1 systemd[1]: Starting Netfilter Tables...
Oct 09 09:32:01 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:32:01 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:32:01 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:32:01 compute-1 systemd[1]: Reached target sshd-keygen.target.
Oct 09 09:32:01 compute-1 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 09 09:32:01 compute-1 systemd[1]: Reached target User and Group Name Lookups.
Oct 09 09:32:01 compute-1 systemd[1]: Starting Resets System Activity Logs...
Oct 09 09:32:01 compute-1 systemd[1]: Starting User Login Management...
Oct 09 09:32:01 compute-1 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 09 09:32:01 compute-1 systemd[1]: Finished Resets System Activity Logs.
Oct 09 09:32:01 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:32:01 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:32:01 compute-1 systemd[1]: Finished Create netns directory.
Oct 09 09:32:01 compute-1 systemd-logind[798]: New seat seat0.
Oct 09 09:32:01 compute-1 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 09 09:32:01 compute-1 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 09 09:32:01 compute-1 chronyd[805]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 09 09:32:01 compute-1 systemd[1]: Started User Login Management.
Oct 09 09:32:01 compute-1 chronyd[805]: Frequency -10.194 +/- 4.106 ppm read from /var/lib/chrony/drift
Oct 09 09:32:01 compute-1 chronyd[805]: Loaded seccomp filter (level 2)
Oct 09 09:32:01 compute-1 systemd[1]: Started NTP client/server.
Oct 09 09:32:01 compute-1 systemd[1]: Finished Netfilter Tables.
Oct 09 09:32:01 compute-1 cloud-init[824]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 09:32:01 +0000. Up 5.32 seconds.
Oct 09 09:32:01 compute-1 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 09 09:32:01 compute-1 systemd[1]: Reached target Preparation for Network.
Oct 09 09:32:01 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Oct 09 09:32:01 compute-1 chown[826]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 09 09:32:01 compute-1 ovs-ctl[831]: Starting ovsdb-server [  OK  ]
Oct 09 09:32:01 compute-1 ovs-vsctl[880]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 09 09:32:02 compute-1 ovs-vsctl[890]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"1479fb1d-afaa-427a-bdce-40294d3573d2\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 09 09:32:02 compute-1 ovs-ctl[831]: Configuring Open vSwitch system IDs [  OK  ]
Oct 09 09:32:02 compute-1 ovs-ctl[831]: Enabling remote OVSDB managers [  OK  ]
Oct 09 09:32:02 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Oct 09 09:32:02 compute-1 ovs-vsctl[896]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 09 09:32:02 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 09 09:32:02 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 09 09:32:02 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 09 09:32:02 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Oct 09 09:32:02 compute-1 ovs-ctl[940]: Inserting openvswitch module [  OK  ]
Oct 09 09:32:02 compute-1 kernel: ovs-system: entered promiscuous mode
Oct 09 09:32:02 compute-1 kernel: Timeout policy base is empty
Oct 09 09:32:02 compute-1 systemd-udevd[699]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:32:02 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 09 09:32:02 compute-1 kernel: vlan22: entered promiscuous mode
Oct 09 09:32:02 compute-1 systemd-udevd[695]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:32:02 compute-1 kernel: vlan20: entered promiscuous mode
Oct 09 09:32:02 compute-1 kernel: vlan21: entered promiscuous mode
Oct 09 09:32:02 compute-1 kernel: vlan23: entered promiscuous mode
Oct 09 09:32:02 compute-1 ovs-ctl[909]: Starting ovs-vswitchd [  OK  ]
Oct 09 09:32:02 compute-1 ovs-ctl[909]: Enabling remote OVSDB managers [  OK  ]
Oct 09 09:32:02 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 09 09:32:02 compute-1 ovs-vsctl[980]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 09 09:32:02 compute-1 systemd[1]: Starting Open vSwitch...
Oct 09 09:32:02 compute-1 systemd[1]: Finished Open vSwitch.
Oct 09 09:32:02 compute-1 systemd[1]: Starting Network Manager...
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.3925] NetworkManager (version 1.54.1-1.el9) is starting... (boot:f3feb77a-4486-4a5c-a8ab-abb4fe64c670)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.3927] Read config: /etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4012] manager[0x562384388040]: monitoring kernel firmware directory '/lib/firmware'.
Oct 09 09:32:02 compute-1 systemd[1]: Starting Hostname Service...
Oct 09 09:32:02 compute-1 systemd[1]: Started Hostname Service.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4607] hostname: hostname: using hostnamed
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4608] hostname: static hostname changed from (none) to "compute-1"
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4611] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4679] manager[0x562384388040]: rfkill: Wi-Fi hardware radio set enabled
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4680] manager[0x562384388040]: rfkill: WWAN hardware radio set enabled
Oct 09 09:32:02 compute-1 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4715] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4731] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4732] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4732] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4733] manager: Networking is enabled by state file
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4736] settings: Loaded settings plugin: keyfile (internal)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4755] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4819] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4834] dhcp: init: Using DHCP client 'internal'
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4836] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4845] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4853] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4858] device (lo): Activation: starting connection 'lo' (34b34808-e917-4b30-9031-19e79dc85e7b)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4865] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4869] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4884] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4886] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4898] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/4)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4900] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4911] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/5)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4912] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4924] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4926] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4940] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/7)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4943] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4955] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/8)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4957] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4962] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4963] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4968] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4970] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4974] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/11)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4975] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4979] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4980] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4985] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4986] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4991] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/14)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4993] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5000] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5002] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 systemd[1]: Started Network Manager.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5010] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5014] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5015] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5016] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5017] device (eth0): carrier: link connected
Oct 09 09:32:02 compute-1 systemd[1]: Reached target Network.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5018] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5019] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5019] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5020] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5020] device (eth1): carrier: link connected
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5026] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5031] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5035] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5038] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5041] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5044] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5047] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5051] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5052] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5053] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5055] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5056] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5057] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5058] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5062] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5063] policy: auto-activating connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5064] policy: auto-activating connection 'eth1-port' (14a02052-08d6-45e5-a948-6208b3559c65)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5065] policy: auto-activating connection 'br-ex-port' (375645b6-33fc-4c37-833a-d9e158ec94ba)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5065] policy: auto-activating connection 'br-ex-br' (4c59ca8f-34eb-40ef-9b98-d07f30800afa)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5066] policy: auto-activating connection 'vlan23-port' (9f7f603c-9def-45b8-9780-a4b31ecf01c3)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5067] policy: auto-activating connection 'vlan21-port' (b752045b-4ebf-405c-afa8-7f17ffa854e4)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5068] policy: auto-activating connection 'vlan20-port' (b852a895-766c-43f9-a4ca-5df9c8f35de0)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5068] policy: auto-activating connection 'vlan22-port' (bb0b9b67-5f18-4a32-a914-7d219a79010b)
Oct 09 09:32:02 compute-1 systemd[1]: Starting Network Manager Wait Online...
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5071] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5072] device (eth1): Activation: starting connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5074] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (14a02052-08d6-45e5-a948-6208b3559c65)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5075] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (375645b6-33fc-4c37-833a-d9e158ec94ba)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5078] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (4c59ca8f-34eb-40ef-9b98-d07f30800afa)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5080] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9f7f603c-9def-45b8-9780-a4b31ecf01c3)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5081] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b752045b-4ebf-405c-afa8-7f17ffa854e4)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5082] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (b852a895-766c-43f9-a4ca-5df9c8f35de0)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5083] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (bb0b9b67-5f18-4a32-a914-7d219a79010b)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5085] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5086] manager: NetworkManager state is now CONNECTING
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5087] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5091] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5093] device (eth1): state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 09 09:32:02 compute-1 kernel: vlan22: left promiscuous mode
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5105] device (eth1): disconnecting for new activation request.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5112] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5118] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5121] device (br-ex)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5126] device (br-ex)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5128] device (eth1)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5134] device (eth1)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5134] device (vlan20)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5139] device (vlan20)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5139] device (vlan21)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5146] device (vlan21)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5151] device (vlan22)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5174] device (vlan22)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5174] device (vlan23)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5184] device (vlan23)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5184] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5189] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5190] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5199] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5203] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5218] device (eth1): disconnecting for new activation request.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5221] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 kernel: vlan23: left promiscuous mode
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5233] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5240] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 09 09:32:02 compute-1 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 09 09:32:02 compute-1 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 09:32:02 compute-1 systemd[1]: Reached target NFS client services.
Oct 09 09:32:02 compute-1 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 09 09:32:02 compute-1 systemd[1]: Reached target Remote File Systems.
Oct 09 09:32:02 compute-1 kernel: vlan21: left promiscuous mode
Oct 09 09:32:02 compute-1 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5319] dhcp4 (eth0): state changed new lease, address=192.168.26.45
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5331] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5368] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5376] device (lo): Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5383] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5385] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5395] device (eth1): Activation: starting connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5401] device (br-ex)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5409] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (375645b6-33fc-4c37-833a-d9e158ec94ba)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5411] device (eth1)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5413] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (14a02052-08d6-45e5-a948-6208b3559c65)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5414] device (vlan20)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5417] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (b852a895-766c-43f9-a4ca-5df9c8f35de0)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5424] device (vlan21)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 kernel: vlan20: left promiscuous mode
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5438] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b752045b-4ebf-405c-afa8-7f17ffa854e4)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5440] device (vlan22)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5443] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (bb0b9b67-5f18-4a32-a914-7d219a79010b)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5447] device (vlan23)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5454] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9f7f603c-9def-45b8-9780-a4b31ecf01c3)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5459] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5465] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5472] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5512] policy: auto-activating connection 'vlan23-if' (7dd63e55-1e2e-4430-8cd7-274623908d35)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5514] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5518] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 kernel: virtio_net virtio5 eth1: left promiscuous mode
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5525] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5527] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5529] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5530] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5535] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5538] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5539] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5541] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5545] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5548] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5549] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5551] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5555] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5557] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5559] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5561] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5565] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5568] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5569] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5570] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5572] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5572] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5573] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5574] policy: auto-activating connection 'vlan21-if' (d6b7cede-f2f7-4e62-ad08-de1c498575f7)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5576] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 kernel: ovs-system: left promiscuous mode
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5579] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5594] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5600] policy: auto-activating connection 'vlan20-if' (b23c7af3-65be-42e5-ab4c-395931000901)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5602] policy: auto-activating connection 'vlan22-if' (fd6ff51b-7be2-4aef-b30a-8784503157ec)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5610] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5614] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5620] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7dd63e55-1e2e-4430-8cd7-274623908d35)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5621] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5626] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5630] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5635] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5640] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5646] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5651] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5654] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5656] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5659] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (d6b7cede-f2f7-4e62-ad08-de1c498575f7)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5660] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5667] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5675] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (b23c7af3-65be-42e5-ab4c-395931000901)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5677] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5682] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5687] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (fd6ff51b-7be2-4aef-b30a-8784503157ec)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5687] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5689] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5693] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5698] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5700] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5709] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5713] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5714] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5715] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5718] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 kernel: ovs-system: entered promiscuous mode
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5720] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5721] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5724] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5725] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5726] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5729] policy: auto-activating connection 'br-ex-if' (01e3ccc4-9f50-4868-b5b2-19c55181f7c5)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5730] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5733] device (eth0): Activation: successful, device activated.
Oct 09 09:32:02 compute-1 kernel: No such timeout policy "ovs_test_tp"
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5736] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5738] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5739] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5740] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5740] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5741] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5741] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5743] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5748] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5757] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5771] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5777] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5787] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5796] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (01e3ccc4-9f50-4868-b5b2-19c55181f7c5)
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5796] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5817] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5822] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5825] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5828] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5830] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5833] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5838] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5845] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5850] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5854] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5859] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5865] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5870] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5872] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5880] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5883] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5884] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 kernel: vlan23: entered promiscuous mode
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5889] device (eth1): Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5981] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5989] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6012] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6014] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6019] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 kernel: br-ex: entered promiscuous mode
Oct 09 09:32:02 compute-1 kernel: vlan22: entered promiscuous mode
Oct 09 09:32:02 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 09 09:32:02 compute-1 kernel: vlan20: entered promiscuous mode
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6195] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6204] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 kernel: vlan21: entered promiscuous mode
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6222] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:02 compute-1 systemd-udevd[709]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6233] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6247] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6249] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6255] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6269] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6270] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6273] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6278] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6299] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6370] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6370] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6380] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6407] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6420] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6443] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6444] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6449] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6452] manager: startup complete
Oct 09 09:32:02 compute-1 systemd[1]: Finished Network Manager Wait Online.
Oct 09 09:32:02 compute-1 systemd[1]: Starting Cloud-init: Network Stage...
Oct 09 09:32:02 compute-1 systemd[1]: Starting Authorization Manager...
Oct 09 09:32:02 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 09 09:32:02 compute-1 polkitd[1120]: Started polkitd version 0.117
Oct 09 09:32:02 compute-1 polkitd[1120]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 09:32:02 compute-1 polkitd[1120]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 09:32:02 compute-1 polkitd[1120]: Finished loading, compiling and executing 3 rules
Oct 09 09:32:02 compute-1 systemd[1]: Started Authorization Manager.
Oct 09 09:32:02 compute-1 polkitd[1120]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 09 09:32:02 compute-1 cloud-init[1206]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 09:32:02 +0000. Up 6.49 seconds.
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   Device   |   Up  |     Address     |      Mask     | Scope  |     Hw-Address    |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   br-ex    |  True | 192.168.122.101 | 255.255.255.0 | global | fa:16:3e:ab:2d:10 |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |    eth0    |  True |  192.168.26.45  | 255.255.255.0 | global | fa:16:3e:c5:2c:a9 |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |    eth1    |  True |        .        |       .       |   .    | fa:16:3e:ab:2d:10 |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |     lo     |  True |    127.0.0.1    |   255.0.0.0   |  host  |         .         |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |     lo     |  True |     ::1/128     |       .       |  host  |         .         |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: | ovs-system | False |        .        |       .       |   .    | ca:95:18:eb:be:48 |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   vlan20   |  True |   172.17.0.101  | 255.255.255.0 | global | 16:87:93:f4:28:e0 |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   vlan21   |  True |   172.18.0.101  | 255.255.255.0 | global | 96:b6:84:03:13:9c |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   vlan22   |  True |   172.19.0.101  | 255.255.255.0 | global | f6:8a:0c:d7:02:85 |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   vlan23   |  True |   172.20.0.101  | 255.255.255.0 | global | f2:e3:a0:5a:63:12 |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   2   |    172.17.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan20  |   U   |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   3   |    172.18.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan21  |   U   |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   4   |    172.19.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan22  |   U   |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   5   |    172.20.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan23  |   U   |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   6   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   7   |  192.168.122.0  |   0.0.0.0    |  255.255.255.0  |   br-ex   |   U   |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: |   2   |  multicast  |    ::   |    eth1   |   U   |
Oct 09 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 09:32:03 compute-1 systemd[1]: Finished Cloud-init: Network Stage.
Oct 09 09:32:03 compute-1 systemd[1]: Reached target Cloud-config availability.
Oct 09 09:32:03 compute-1 systemd[1]: Reached target Network is Online.
Oct 09 09:32:03 compute-1 systemd[1]: Starting Cloud-init: Config Stage...
Oct 09 09:32:03 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Oct 09 09:32:03 compute-1 systemd[1]: Starting Notify NFS peers of a restart...
Oct 09 09:32:03 compute-1 systemd[1]: Starting System Logging Service...
Oct 09 09:32:03 compute-1 sm-notify[1240]: Version 2.5.4 starting
Oct 09 09:32:03 compute-1 systemd[1]: Starting OpenSSH server daemon...
Oct 09 09:32:03 compute-1 systemd[1]: Starting Permit User Sessions...
Oct 09 09:32:03 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Oct 09 09:32:03 compute-1 systemd[1]: Started Notify NFS peers of a restart.
Oct 09 09:32:03 compute-1 systemd[1]: Finished Permit User Sessions.
Oct 09 09:32:03 compute-1 sshd[1242]: Server listening on 0.0.0.0 port 22.
Oct 09 09:32:03 compute-1 sshd[1242]: Server listening on :: port 22.
Oct 09 09:32:03 compute-1 systemd[1]: Started Command Scheduler.
Oct 09 09:32:03 compute-1 systemd[1]: Started Getty on tty1.
Oct 09 09:32:03 compute-1 crond[1244]: (CRON) STARTUP (1.5.7)
Oct 09 09:32:03 compute-1 crond[1244]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 09 09:32:03 compute-1 crond[1244]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 66% if used.)
Oct 09 09:32:03 compute-1 crond[1244]: (CRON) INFO (running with inotify support)
Oct 09 09:32:03 compute-1 rsyslogd[1241]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1241" x-info="https://www.rsyslog.com"] start
Oct 09 09:32:03 compute-1 systemd[1]: Started Serial Getty on ttyS0.
Oct 09 09:32:03 compute-1 systemd[1]: Reached target Login Prompts.
Oct 09 09:32:03 compute-1 systemd[1]: Started OpenSSH server daemon.
Oct 09 09:32:03 compute-1 systemd[1]: Started System Logging Service.
Oct 09 09:32:03 compute-1 systemd[1]: Reached target Multi-User System.
Oct 09 09:32:03 compute-1 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 09 09:32:03 compute-1 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 09 09:32:03 compute-1 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 09 09:32:03 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:32:03 compute-1 cloud-init[1254]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 09:32:03 +0000. Up 6.95 seconds.
Oct 09 09:32:03 compute-1 systemd[1]: Finished Cloud-init: Config Stage.
Oct 09 09:32:03 compute-1 systemd[1]: Starting Cloud-init: Final Stage...
Oct 09 09:32:03 compute-1 cloud-init[1258]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 09:32:03 +0000. Up 7.26 seconds.
Oct 09 09:32:03 compute-1 cloud-init[1258]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 09:32:03 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 7.30 seconds
Oct 09 09:32:03 compute-1 systemd[1]: Finished Cloud-init: Final Stage.
Oct 09 09:32:03 compute-1 systemd[1]: Reached target Cloud-init target.
Oct 09 09:32:03 compute-1 systemd[1]: Startup finished in 1.378s (kernel) + 1.896s (initrd) + 4.074s (userspace) = 7.350s.
Oct 09 09:32:11 compute-1 irqbalance[794]: Cannot change IRQ 45 affinity: Operation not permitted
Oct 09 09:32:11 compute-1 irqbalance[794]: IRQ 45 affinity is now unmanaged
Oct 09 09:32:11 compute-1 irqbalance[794]: Cannot change IRQ 43 affinity: Operation not permitted
Oct 09 09:32:11 compute-1 irqbalance[794]: IRQ 43 affinity is now unmanaged
Oct 09 09:32:11 compute-1 irqbalance[794]: Cannot change IRQ 42 affinity: Operation not permitted
Oct 09 09:32:11 compute-1 irqbalance[794]: IRQ 42 affinity is now unmanaged
Oct 09 09:32:12 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 09 09:32:32 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 09 09:32:51 compute-1 sshd-session[1264]: Accepted publickey for zuul from 192.168.122.30 port 53604 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:32:51 compute-1 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 09:32:51 compute-1 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 09:32:51 compute-1 systemd-logind[798]: New session 1 of user zuul.
Oct 09 09:32:51 compute-1 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 09:32:51 compute-1 systemd[1]: Starting User Manager for UID 1000...
Oct 09 09:32:51 compute-1 systemd[1268]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:32:52 compute-1 systemd[1268]: Queued start job for default target Main User Target.
Oct 09 09:32:52 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:32:52 compute-1 systemd[1268]: Created slice User Application Slice.
Oct 09 09:32:52 compute-1 systemd[1268]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:32:52 compute-1 systemd[1268]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:32:52 compute-1 systemd[1268]: Reached target Paths.
Oct 09 09:32:52 compute-1 systemd[1268]: Reached target Timers.
Oct 09 09:32:52 compute-1 systemd[1268]: Starting D-Bus User Message Bus Socket...
Oct 09 09:32:52 compute-1 systemd[1268]: Starting Create User's Volatile Files and Directories...
Oct 09 09:32:52 compute-1 systemd[1268]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:32:52 compute-1 systemd[1268]: Finished Create User's Volatile Files and Directories.
Oct 09 09:32:52 compute-1 systemd[1268]: Reached target Sockets.
Oct 09 09:32:52 compute-1 systemd[1268]: Reached target Basic System.
Oct 09 09:32:52 compute-1 systemd[1268]: Reached target Main User Target.
Oct 09 09:32:52 compute-1 systemd[1268]: Startup finished in 85ms.
Oct 09 09:32:52 compute-1 systemd[1]: Started User Manager for UID 1000.
Oct 09 09:32:52 compute-1 systemd[1]: Started Session 1 of User zuul.
Oct 09 09:32:52 compute-1 sshd-session[1264]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:32:52 compute-1 sudo[1310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipjwfrommuphkgxymsezranbchcxxuwa ; cat /proc/sys/kernel/random/boot_id'
Oct 09 09:32:52 compute-1 sudo[1310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:52 compute-1 sudo[1310]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:52 compute-1 sudo[1339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxvrgeejcqavzmyhwfkvdrmtxwoapamq ; whoami'
Oct 09 09:32:52 compute-1 sudo[1339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:52 compute-1 sudo[1339]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:52 compute-1 sudo[1491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgjtofqjwqjeosbolgtvadanpukeuzae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002371.6748188-336-82523463770078/AnsiballZ_file.py'
Oct 09 09:32:52 compute-1 sudo[1491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:52 compute-1 python3.9[1493]: ansible-ansible.builtin.file Invoked with path=/var/lib/openstack/reboot_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:32:52 compute-1 sudo[1491]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:53 compute-1 sshd-session[1283]: Connection closed by 192.168.122.30 port 53604
Oct 09 09:32:53 compute-1 sshd-session[1264]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:32:53 compute-1 systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Oct 09 09:32:53 compute-1 systemd[1]: session-1.scope: Deactivated successfully.
Oct 09 09:32:53 compute-1 systemd-logind[798]: Removed session 1.
Oct 09 09:32:59 compute-1 sshd-session[1518]: Accepted publickey for zuul from 192.168.26.46 port 40706 ssh2: RSA SHA256:v7VHW1cDs9OvF+ufrkmS713ZRHIh0wMGaPvplRsZw/E
Oct 09 09:32:59 compute-1 systemd-logind[798]: New session 3 of user zuul.
Oct 09 09:32:59 compute-1 systemd[1]: Started Session 3 of User zuul.
Oct 09 09:32:59 compute-1 sshd-session[1518]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:32:59 compute-1 sudo[1594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wundtbjjpwwhoqqfuyrawwkjquxrrurk ; /usr/bin/python3'
Oct 09 09:32:59 compute-1 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:59 compute-1 useradd[1598]: new group: name=ceph-admin, GID=42478
Oct 09 09:32:59 compute-1 useradd[1598]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 09 09:32:59 compute-1 sudo[1594]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:00 compute-1 sudo[1680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnpduxdgthhxzlabivmnhjeavckvemjj ; /usr/bin/python3'
Oct 09 09:33:00 compute-1 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:00 compute-1 sudo[1680]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:00 compute-1 sudo[1753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adnyqokfmbtzwdzcsfzyisxbcpmsatet ; /usr/bin/python3'
Oct 09 09:33:00 compute-1 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:00 compute-1 sudo[1753]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:00 compute-1 sudo[1803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqosgtwrrfxywuluovmpjblluwhsargq ; /usr/bin/python3'
Oct 09 09:33:00 compute-1 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:00 compute-1 sudo[1803]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:01 compute-1 sudo[1829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjrgixkrwromllrwwnoysulcxztcwljz ; /usr/bin/python3'
Oct 09 09:33:01 compute-1 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:01 compute-1 sudo[1829]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:01 compute-1 sudo[1855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzyhdnykqbrnqvpnbqgzlieatagelain ; /usr/bin/python3'
Oct 09 09:33:01 compute-1 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:01 compute-1 sudo[1855]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:01 compute-1 sudo[1881]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wudkmwhodfaweqavoxxvdabtfmhiwshj ; /usr/bin/python3'
Oct 09 09:33:01 compute-1 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:01 compute-1 sudo[1881]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:02 compute-1 sudo[1959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztcnzexjqvglzqmtlsdekxteltvfeqgq ; /usr/bin/python3'
Oct 09 09:33:02 compute-1 sudo[1959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:02 compute-1 sudo[1959]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:02 compute-1 sudo[2032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yduztztnejzprnqqbpkvibifsfhbwzno ; /usr/bin/python3'
Oct 09 09:33:02 compute-1 sudo[2032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:02 compute-1 sudo[2032]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:02 compute-1 sudo[2134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-timjlknmqrobgfcqhrgwwagvuhizenet ; /usr/bin/python3'
Oct 09 09:33:02 compute-1 sudo[2134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:02 compute-1 sudo[2134]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:03 compute-1 sudo[2207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuqvsuijpocoiicnqjusmnmekltqkqih ; /usr/bin/python3'
Oct 09 09:33:03 compute-1 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:03 compute-1 sudo[2207]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:03 compute-1 sudo[2257]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsidsrhbysgdgtzxqyuwptkqmkluezeq ; /usr/bin/python3'
Oct 09 09:33:03 compute-1 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:03 compute-1 python3[2259]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:33:04 compute-1 sudo[2257]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:05 compute-1 sudo[2348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yshqiknvgcujhovybopahuheahvqjbwd ; /usr/bin/python3'
Oct 09 09:33:05 compute-1 sudo[2348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:05 compute-1 python3[2350]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 09 09:33:06 compute-1 sudo[2348]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:06 compute-1 sudo[2375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzlmsephtryajvnwnxjefkesgvnfrivu ; /usr/bin/python3'
Oct 09 09:33:06 compute-1 sudo[2375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:06 compute-1 python3[2377]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:33:06 compute-1 sudo[2375]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:06 compute-1 sudo[2401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcapejrlvzfccevvszoieozlmbbpnjtc ; /usr/bin/python3'
Oct 09 09:33:06 compute-1 sudo[2401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:06 compute-1 python3[2403]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                         losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:33:06 compute-1 kernel: loop: module loaded
Oct 09 09:33:06 compute-1 kernel: loop3: detected capacity change from 0 to 41943040
Oct 09 09:33:06 compute-1 sudo[2401]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:07 compute-1 sudo[2436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxassfecnnejzspqsbaolboubcntwnoc ; /usr/bin/python3'
Oct 09 09:33:07 compute-1 sudo[2436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:07 compute-1 python3[2438]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                         vgcreate ceph_vg0 /dev/loop3
                                         lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:33:07 compute-1 lvm[2441]: PV /dev/loop3 not used.
Oct 09 09:33:07 compute-1 lvm[2443]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:33:07 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 09 09:33:07 compute-1 lvm[2452]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:33:07 compute-1 lvm[2452]: VG ceph_vg0 finished
Oct 09 09:33:07 compute-1 lvm[2449]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 09 09:33:07 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 09 09:33:07 compute-1 sudo[2436]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:07 compute-1 sudo[2528]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tomrwzfmyljcxsvedqpcavtapjllurtn ; /usr/bin/python3'
Oct 09 09:33:07 compute-1 sudo[2528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:07 compute-1 python3[2530]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 09 09:33:07 compute-1 sudo[2528]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:07 compute-1 sudo[2601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipqsgirahpfcarjsmbpjnvkbjcvqtucc ; /usr/bin/python3'
Oct 09 09:33:07 compute-1 sudo[2601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:07 compute-1 python3[2603]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002386.786316-33834-252251040756258/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:33:07 compute-1 sudo[2601]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:08 compute-1 sudo[2651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgluzdqqyfmmoycieoniwkjetngulekh ; /usr/bin/python3'
Oct 09 09:33:08 compute-1 sudo[2651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:08 compute-1 python3[2653]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:33:08 compute-1 systemd[1]: Reloading.
Oct 09 09:33:08 compute-1 systemd-sysv-generator[2679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:33:08 compute-1 systemd-rc-local-generator[2676]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:33:08 compute-1 systemd[1]: Starting Ceph OSD losetup...
Oct 09 09:33:08 compute-1 bash[2693]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Oct 09 09:33:08 compute-1 systemd[1]: Finished Ceph OSD losetup.
Oct 09 09:33:08 compute-1 lvm[2694]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:33:08 compute-1 lvm[2694]: VG ceph_vg0 finished
Oct 09 09:33:08 compute-1 sudo[2651]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:10 compute-1 python3[2718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:34:15 compute-1 sshd-session[2762]: Accepted publickey for ceph-admin from 192.168.122.100 port 38588 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:15 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Oct 09 09:34:15 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 09 09:34:15 compute-1 systemd-logind[798]: New session 4 of user ceph-admin.
Oct 09 09:34:15 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 09 09:34:15 compute-1 systemd[1]: Starting User Manager for UID 42477...
Oct 09 09:34:15 compute-1 systemd[2766]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:15 compute-1 systemd[2766]: Queued start job for default target Main User Target.
Oct 09 09:34:15 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:34:15 compute-1 systemd[2766]: Created slice User Application Slice.
Oct 09 09:34:15 compute-1 systemd[2766]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:34:15 compute-1 systemd[2766]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:34:15 compute-1 systemd[2766]: Reached target Paths.
Oct 09 09:34:15 compute-1 systemd[2766]: Reached target Timers.
Oct 09 09:34:15 compute-1 systemd[2766]: Starting D-Bus User Message Bus Socket...
Oct 09 09:34:15 compute-1 systemd[2766]: Starting Create User's Volatile Files and Directories...
Oct 09 09:34:15 compute-1 systemd[2766]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:34:15 compute-1 systemd[2766]: Reached target Sockets.
Oct 09 09:34:15 compute-1 systemd[2766]: Finished Create User's Volatile Files and Directories.
Oct 09 09:34:15 compute-1 systemd[2766]: Reached target Basic System.
Oct 09 09:34:15 compute-1 systemd[1]: Started User Manager for UID 42477.
Oct 09 09:34:15 compute-1 systemd[2766]: Reached target Main User Target.
Oct 09 09:34:15 compute-1 systemd[2766]: Startup finished in 75ms.
Oct 09 09:34:15 compute-1 systemd[1]: Started Session 4 of User ceph-admin.
Oct 09 09:34:15 compute-1 sshd-session[2762]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:15 compute-1 sshd-session[2782]: Accepted publickey for ceph-admin from 192.168.122.100 port 38594 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:15 compute-1 systemd-logind[798]: New session 6 of user ceph-admin.
Oct 09 09:34:15 compute-1 systemd[1]: Started Session 6 of User ceph-admin.
Oct 09 09:34:15 compute-1 sshd-session[2782]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:15 compute-1 sudo[2786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:15 compute-1 sudo[2786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:15 compute-1 sudo[2786]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:15 compute-1 sshd-session[2811]: Accepted publickey for ceph-admin from 192.168.122.100 port 38608 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:15 compute-1 systemd-logind[798]: New session 7 of user ceph-admin.
Oct 09 09:34:15 compute-1 systemd[1]: Started Session 7 of User ceph-admin.
Oct 09 09:34:15 compute-1 sshd-session[2811]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:15 compute-1 sudo[2815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Oct 09 09:34:15 compute-1 sudo[2815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:15 compute-1 sudo[2815]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:16 compute-1 sshd-session[2840]: Accepted publickey for ceph-admin from 192.168.122.100 port 38614 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:16 compute-1 systemd-logind[798]: New session 8 of user ceph-admin.
Oct 09 09:34:16 compute-1 systemd[1]: Started Session 8 of User ceph-admin.
Oct 09 09:34:16 compute-1 sshd-session[2840]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:16 compute-1 sudo[2844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 09 09:34:16 compute-1 sudo[2844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:16 compute-1 sudo[2844]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:16 compute-1 sshd-session[2869]: Accepted publickey for ceph-admin from 192.168.122.100 port 38624 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:16 compute-1 systemd-logind[798]: New session 9 of user ceph-admin.
Oct 09 09:34:16 compute-1 systemd[1]: Started Session 9 of User ceph-admin.
Oct 09 09:34:16 compute-1 sshd-session[2869]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:16 compute-1 sudo[2873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:16 compute-1 sudo[2873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:16 compute-1 sudo[2873]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:16 compute-1 sshd-session[2898]: Accepted publickey for ceph-admin from 192.168.122.100 port 38628 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:16 compute-1 systemd-logind[798]: New session 10 of user ceph-admin.
Oct 09 09:34:16 compute-1 systemd[1]: Started Session 10 of User ceph-admin.
Oct 09 09:34:16 compute-1 sshd-session[2898]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:16 compute-1 sudo[2902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:16 compute-1 sudo[2902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:16 compute-1 sudo[2902]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:16 compute-1 sshd-session[2927]: Accepted publickey for ceph-admin from 192.168.122.100 port 38640 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:16 compute-1 systemd-logind[798]: New session 11 of user ceph-admin.
Oct 09 09:34:16 compute-1 systemd[1]: Started Session 11 of User ceph-admin.
Oct 09 09:34:16 compute-1 sshd-session[2927]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:16 compute-1 sudo[2931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 09 09:34:16 compute-1 sudo[2931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:16 compute-1 sudo[2931]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:17 compute-1 sshd-session[2956]: Accepted publickey for ceph-admin from 192.168.122.100 port 38650 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:17 compute-1 systemd-logind[798]: New session 12 of user ceph-admin.
Oct 09 09:34:17 compute-1 systemd[1]: Started Session 12 of User ceph-admin.
Oct 09 09:34:17 compute-1 sshd-session[2956]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:17 compute-1 sudo[2960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:17 compute-1 sudo[2960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:17 compute-1 sudo[2960]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:17 compute-1 sshd-session[2985]: Accepted publickey for ceph-admin from 192.168.122.100 port 38662 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:17 compute-1 systemd-logind[798]: New session 13 of user ceph-admin.
Oct 09 09:34:17 compute-1 systemd[1]: Started Session 13 of User ceph-admin.
Oct 09 09:34:17 compute-1 sshd-session[2985]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:17 compute-1 sudo[2989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 09 09:34:17 compute-1 sudo[2989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:17 compute-1 sudo[2989]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:17 compute-1 sshd-session[3014]: Accepted publickey for ceph-admin from 192.168.122.100 port 38672 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:17 compute-1 systemd-logind[798]: New session 14 of user ceph-admin.
Oct 09 09:34:17 compute-1 systemd[1]: Started Session 14 of User ceph-admin.
Oct 09 09:34:17 compute-1 sshd-session[3014]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:18 compute-1 sshd-session[3041]: Accepted publickey for ceph-admin from 192.168.122.100 port 38686 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:18 compute-1 systemd-logind[798]: New session 15 of user ceph-admin.
Oct 09 09:34:18 compute-1 systemd[1]: Started Session 15 of User ceph-admin.
Oct 09 09:34:18 compute-1 sshd-session[3041]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:18 compute-1 sudo[3045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 09 09:34:18 compute-1 sudo[3045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:18 compute-1 sudo[3045]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:18 compute-1 sshd-session[3070]: Accepted publickey for ceph-admin from 192.168.122.100 port 38698 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:18 compute-1 systemd-logind[798]: New session 16 of user ceph-admin.
Oct 09 09:34:18 compute-1 systemd[1]: Started Session 16 of User ceph-admin.
Oct 09 09:34:18 compute-1 sshd-session[3070]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:18 compute-1 sudo[3074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Oct 09 09:34:18 compute-1 sudo[3074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat3730103292-merged.mount: Deactivated successfully.
Oct 09 09:34:19 compute-1 kernel: evm: overlay not supported
Oct 09 09:34:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2499023546-merged.mount: Deactivated successfully.
Oct 09 09:34:19 compute-1 podman[3099]: 2025-10-09 09:34:19.086844589 +0000 UTC m=+0.063795404 system refresh
Oct 09 09:34:19 compute-1 sudo[3074]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-1 sudo[3120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:19 compute-1 sudo[3120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-1 sudo[3120]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-1 sudo[3145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 09 09:34:19 compute-1 sudo[3145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-1 sudo[3145]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-1 sudo[3187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:19 compute-1 sudo[3187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-1 sudo[3187]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-1 sudo[3212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:34:19 compute-1 sudo[3212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-1 sudo[3212]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-1 sudo[3267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:19 compute-1 sudo[3267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-1 sudo[3267]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-1 sudo[3292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:34:19 compute-1 sudo[3292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:20 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 3327 (sysctl)
Oct 09 09:34:20 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:20 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 09 09:34:20 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 09 09:34:20 compute-1 sudo[3292]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:20 compute-1 sudo[3348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:20 compute-1 sudo[3348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:20 compute-1 sudo[3348]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:20 compute-1 sudo[3373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 09:34:20 compute-1 sudo[3373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:20 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:20 compute-1 sudo[3373]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:20 compute-1 sudo[3414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:20 compute-1 sudo[3414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:20 compute-1 sudo[3414]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:20 compute-1 sudo[3439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- inventory --format=json-pretty --filter-for-batch
Oct 09 09:34:20 compute-1 sudo[3439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:21 compute-1 chronyd[805]: Selected source 69.176.84.79 (pool.ntp.org)
Oct 09 09:34:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat1831674394-merged.mount: Deactivated successfully.
Oct 09 09:34:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat1831674394-lower\x2dmapped.mount: Deactivated successfully.
Oct 09 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.34041738 +0000 UTC m=+15.457803419 container create f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:34:36 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 09 09:34:36 compute-1 systemd[1]: Started libpod-conmon-f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50.scope.
Oct 09 09:34:36 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.400912381 +0000 UTC m=+15.518298420 container init f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 09 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.405921004 +0000 UTC m=+15.523307032 container start f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.407478059 +0000 UTC m=+15.524864088 container attach f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Oct 09 09:34:36 compute-1 vigilant_dhawan[3543]: 167 167
Oct 09 09:34:36 compute-1 systemd[1]: libpod-f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50.scope: Deactivated successfully.
Oct 09 09:34:36 compute-1 conmon[3543]: conmon f560b0ef18eaa34626eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50.scope/container/memory.events
Oct 09 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.411413538 +0000 UTC m=+15.528799567 container died f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.329357149 +0000 UTC m=+15.446743178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-64f59aba5bdcf65f69169a604275c90d8c09628f437d754203486f296b26fb31-merged.mount: Deactivated successfully.
Oct 09 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.427175573 +0000 UTC m=+15.544561602 container remove f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Oct 09 09:34:36 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:36 compute-1 systemd[1]: libpod-conmon-f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50.scope: Deactivated successfully.
Oct 09 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.535332029 +0000 UTC m=+0.025767050 container create 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 09 09:34:36 compute-1 systemd[1]: Started libpod-conmon-37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d.scope.
Oct 09 09:34:36 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b7db690092db157f48b25b619a56e1d25d854fd54a899790e353154bee6a36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b7db690092db157f48b25b619a56e1d25d854fd54a899790e353154bee6a36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.578165601 +0000 UTC m=+0.068600622 container init 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Oct 09 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.582851505 +0000 UTC m=+0.073286527 container start 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 09 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.583857121 +0000 UTC m=+0.074292142 container attach 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 09 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.524774775 +0000 UTC m=+0.015209816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]: [
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:     {
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         "available": false,
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         "being_replaced": false,
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         "ceph_device_lvm": false,
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         "lsm_data": {},
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         "lvs": [],
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         "path": "/dev/sr0",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         "rejected_reasons": [
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "Insufficient space (<5GB)",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "Has a FileSystem"
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         ],
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         "sys_api": {
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "actuators": null,
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "device_nodes": [
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:                 "sr0"
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             ],
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "devname": "sr0",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "human_readable_size": "474.00 KB",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "id_bus": "ata",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "model": "QEMU DVD-ROM",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "nr_requests": "64",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "parent": "/dev/sr0",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "partitions": {},
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "path": "/dev/sr0",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "removable": "1",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "rev": "2.5+",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "ro": "0",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "rotational": "0",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "sas_address": "",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "sas_device_handle": "",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "scheduler_mode": "mq-deadline",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "sectors": 0,
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "sectorsize": "2048",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "size": 485376.0,
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "support_discard": "2048",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "type": "disk",
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:             "vendor": "QEMU"
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:         }
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]:     }
Oct 09 09:34:37 compute-1 suspicious_goodall[3578]: ]
Oct 09 09:34:37 compute-1 systemd[1]: libpod-37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d.scope: Deactivated successfully.
Oct 09 09:34:37 compute-1 podman[4438]: 2025-10-09 09:34:37.118299463 +0000 UTC m=+0.018126044 container died 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 09 09:34:37 compute-1 podman[4438]: 2025-10-09 09:34:37.135118732 +0000 UTC m=+0.034945302 container remove 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 09 09:34:37 compute-1 systemd[1]: libpod-conmon-37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d.scope: Deactivated successfully.
Oct 09 09:34:37 compute-1 sudo[3439]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:34:37 compute-1 sudo[4449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4449]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:34:37 compute-1 sudo[4474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4474]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:34:37 compute-1 sudo[4499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4499]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:37 compute-1 sudo[4524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:37 compute-1 sudo[4524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4524]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:34:37 compute-1 sudo[4549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4549]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:34:37 compute-1 sudo[4597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4597]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:34:37 compute-1 sudo[4622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4622]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 09:34:37 compute-1 sudo[4647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4647]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:34:37 compute-1 sudo[4672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4672]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:34:37 compute-1 sudo[4697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4697]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:34:37 compute-1 sudo[4722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4722]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:37 compute-1 sudo[4747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4747]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:34:37 compute-1 sudo[4772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4772]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:34:37 compute-1 sudo[4820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4820]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:34:37 compute-1 sudo[4845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4845]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:34:37 compute-1 sudo[4870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4870]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:34:37 compute-1 sudo[4895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4895]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:37 compute-1 sudo[4920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:34:37 compute-1 sudo[4920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:37 compute-1 sudo[4920]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[4945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:34:38 compute-1 sudo[4945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[4945]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[4970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:38 compute-1 sudo[4970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[4970]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[4995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:34:38 compute-1 sudo[4995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[4995]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:34:38 compute-1 sudo[5043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5043]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:34:38 compute-1 sudo[5068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5068]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 09:34:38 compute-1 sudo[5093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5093]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:34:38 compute-1 sudo[5118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5118]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:34:38 compute-1 sudo[5143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5143]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:34:38 compute-1 sudo[5168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5168]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:38 compute-1 sudo[5193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5193]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:34:38 compute-1 sudo[5218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5218]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:34:38 compute-1 sudo[5266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5266]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:34:38 compute-1 sudo[5291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5291]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:34:38 compute-1 sudo[5316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5316]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:38 compute-1 sudo[5341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 sudo[5341]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:38 compute-1 sudo[5366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:38 compute-1 sudo[5366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:38 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:38 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.938804981 +0000 UTC m=+0.026047859 container create e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:34:38 compute-1 systemd[1]: Started libpod-conmon-e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65.scope.
Oct 09 09:34:38 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.980159545 +0000 UTC m=+0.067402423 container init e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.984664499 +0000 UTC m=+0.071907377 container start e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 09 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.9856778 +0000 UTC m=+0.072920677 container attach e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 09 09:34:38 compute-1 optimistic_lederberg[5440]: 167 167
Oct 09 09:34:38 compute-1 systemd[1]: libpod-e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65.scope: Deactivated successfully.
Oct 09 09:34:38 compute-1 conmon[5440]: conmon e8cf1ff3116329a4f379 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65.scope/container/memory.events
Oct 09 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.988637168 +0000 UTC m=+0.075880047 container died e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 09 09:34:39 compute-1 podman[5425]: 2025-10-09 09:34:39.005127388 +0000 UTC m=+0.092370265 container remove e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 09 09:34:39 compute-1 podman[5425]: 2025-10-09 09:34:38.928554656 +0000 UTC m=+0.015797554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:39 compute-1 systemd[1]: libpod-conmon-e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65.scope: Deactivated successfully.
Oct 09 09:34:39 compute-1 systemd[1]: Reloading.
Oct 09 09:34:39 compute-1 systemd-sysv-generator[5479]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:34:39 compute-1 systemd-rc-local-generator[5476]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:34:39 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:39 compute-1 systemd[1]: Reloading.
Oct 09 09:34:39 compute-1 systemd-rc-local-generator[5513]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:34:39 compute-1 systemd-sysv-generator[5516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:34:39 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Oct 09 09:34:39 compute-1 systemd[1]: Reloading.
Oct 09 09:34:39 compute-1 systemd-rc-local-generator[5550]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:34:39 compute-1 systemd-sysv-generator[5553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:34:39 compute-1 systemd[1]: Reached target Ceph cluster 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:34:39 compute-1 systemd[1]: Reloading.
Oct 09 09:34:39 compute-1 systemd-rc-local-generator[5588]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:34:39 compute-1 systemd-sysv-generator[5591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:34:39 compute-1 systemd[1]: Reloading.
Oct 09 09:34:39 compute-1 systemd-rc-local-generator[5630]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:34:39 compute-1 systemd-sysv-generator[5634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:34:40 compute-1 systemd[1]: Created slice Slice /system/ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:34:40 compute-1 systemd[1]: Reached target System Time Set.
Oct 09 09:34:40 compute-1 systemd[1]: Reached target System Time Synchronized.
Oct 09 09:34:40 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:34:40 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:40 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:40 compute-1 podman[5685]: 2025-10-09 09:34:40.185214568 +0000 UTC m=+0.027935948 container create cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a34cb825b3e725a1b98572d527776d2dea4d19288afb174cbe8c9fc36d3db02/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a34cb825b3e725a1b98572d527776d2dea4d19288afb174cbe8c9fc36d3db02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a34cb825b3e725a1b98572d527776d2dea4d19288afb174cbe8c9fc36d3db02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:40 compute-1 podman[5685]: 2025-10-09 09:34:40.225106926 +0000 UTC m=+0.067828296 container init cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:34:40 compute-1 podman[5685]: 2025-10-09 09:34:40.228589261 +0000 UTC m=+0.071310631 container start cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 09 09:34:40 compute-1 bash[5685]: cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a
Oct 09 09:34:40 compute-1 podman[5685]: 2025-10-09 09:34:40.173430472 +0000 UTC m=+0.016151863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:40 compute-1 systemd[1]: Started Ceph crash.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:34:40 compute-1 sudo[5366]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 09 09:34:40 compute-1 sudo[5704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:40 compute-1 sudo[5704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:40 compute-1 sudo[5704]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.341+0000 7fd3e181f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 09 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.341+0000 7fd3e181f640 -1 AuthRegistry(0x7fd3dc0698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 09 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.342+0000 7fd3e181f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 09 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.342+0000 7fd3e181f640 -1 AuthRegistry(0x7fd3e181dff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 09 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.343+0000 7fd3daffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 09 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.343+0000 7fd3e181f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 09 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 09 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 09 09:34:40 compute-1 sudo[5729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 09 09:34:40 compute-1 sudo[5729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.625006708 +0000 UTC m=+0.024383630 container create a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:34:40 compute-1 systemd[1]: Started libpod-conmon-a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35.scope.
Oct 09 09:34:40 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.675454876 +0000 UTC m=+0.074831828 container init a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 09 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.679735898 +0000 UTC m=+0.079112830 container start a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 09 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.6807656 +0000 UTC m=+0.080142542 container attach a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 09 09:34:40 compute-1 busy_fermi[5808]: 167 167
Oct 09 09:34:40 compute-1 systemd[1]: libpod-a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35.scope: Deactivated successfully.
Oct 09 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.683582689 +0000 UTC m=+0.082959622 container died a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.698635298 +0000 UTC m=+0.098012231 container remove a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.615245446 +0000 UTC m=+0.014622378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:40 compute-1 systemd[1]: libpod-conmon-a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35.scope: Deactivated successfully.
Oct 09 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.802430751 +0000 UTC m=+0.024331313 container create 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:34:40 compute-1 systemd[1]: Started libpod-conmon-519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5.scope.
Oct 09 09:34:40 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.853290897 +0000 UTC m=+0.075191458 container init 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 09 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.858992126 +0000 UTC m=+0.080892687 container start 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 09 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.86015628 +0000 UTC m=+0.082056841 container attach 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.792421973 +0000 UTC m=+0.014322554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: --> passed data devices: 0 physical, 1 LVM
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6a6825df-a8f3-41ad-b7ed-1604f01d2f74
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 09 09:34:41 compute-1 lvm[5904]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:34:41 compute-1 lvm[5904]: VG ceph_vg0 finished
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]:  stderr: got monmap epoch 1
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: --> Creating keyring file for osd.0
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 09 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 6a6825df-a8f3-41ad-b7ed-1604f01d2f74 --setuser ceph --setgroup ceph
Oct 09 09:34:44 compute-1 exciting_torvalds[5843]:  stderr: 2025-10-09T09:34:41.946+0000 7f38f1678740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Oct 09 09:34:44 compute-1 exciting_torvalds[5843]:  stderr: 2025-10-09T09:34:42.208+0000 7f38f1678740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 09 09:34:44 compute-1 exciting_torvalds[5843]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 09 09:34:44 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 09 09:34:44 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 09 09:34:45 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:45 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:45 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 09:34:45 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 09 09:34:45 compute-1 exciting_torvalds[5843]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 09 09:34:45 compute-1 exciting_torvalds[5843]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 09 09:34:45 compute-1 systemd[1]: libpod-519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5.scope: Deactivated successfully.
Oct 09 09:34:45 compute-1 systemd[1]: libpod-519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5.scope: Consumed 1.367s CPU time.
Oct 09 09:34:45 compute-1 podman[5830]: 2025-10-09 09:34:45.223812921 +0000 UTC m=+4.445713492 container died 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 09 09:34:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f-merged.mount: Deactivated successfully.
Oct 09 09:34:45 compute-1 podman[5830]: 2025-10-09 09:34:45.244104055 +0000 UTC m=+4.466004616 container remove 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Oct 09 09:34:45 compute-1 systemd[1]: libpod-conmon-519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5.scope: Deactivated successfully.
Oct 09 09:34:45 compute-1 sudo[5729]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:45 compute-1 sudo[6826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:45 compute-1 sudo[6826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:45 compute-1 sudo[6826]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:45 compute-1 sudo[6851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- lvm list --format json
Oct 09 09:34:45 compute-1 sudo[6851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.59559335 +0000 UTC m=+0.023209077 container create ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 09 09:34:45 compute-1 systemd[1]: Started libpod-conmon-ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c.scope.
Oct 09 09:34:45 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.643143255 +0000 UTC m=+0.070758992 container init ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 09 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.647471766 +0000 UTC m=+0.075087492 container start ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 09 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.648399003 +0000 UTC m=+0.076014730 container attach ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 09 09:34:45 compute-1 charming_brattain[6920]: 167 167
Oct 09 09:34:45 compute-1 systemd[1]: libpod-ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c.scope: Deactivated successfully.
Oct 09 09:34:45 compute-1 conmon[6920]: conmon ec9002e816dbe69d9699 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c.scope/container/memory.events
Oct 09 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.650991992 +0000 UTC m=+0.078607719 container died ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 09 09:34:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-1d06b7f5026899de72e7403e102d1005a0547192e29bfe6b088f0a45fe3b0e12-merged.mount: Deactivated successfully.
Oct 09 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.67097728 +0000 UTC m=+0.098593007 container remove ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.585783606 +0000 UTC m=+0.013399353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:45 compute-1 systemd[1]: libpod-conmon-ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c.scope: Deactivated successfully.
Oct 09 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.77432581 +0000 UTC m=+0.024441790 container create 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 09 09:34:45 compute-1 systemd[1]: Started libpod-conmon-58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69.scope.
Oct 09 09:34:45 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.812195174 +0000 UTC m=+0.062311154 container init 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 09 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.817568945 +0000 UTC m=+0.067684926 container start 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 09 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.818526671 +0000 UTC m=+0.068642652 container attach 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 09 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.764732574 +0000 UTC m=+0.014848564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:46 compute-1 tender_joliot[6954]: {
Oct 09 09:34:46 compute-1 tender_joliot[6954]:     "0": [
Oct 09 09:34:46 compute-1 tender_joliot[6954]:         {
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "devices": [
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "/dev/loop3"
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             ],
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "lv_name": "ceph_lv0",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "lv_size": "21470642176",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=HIhrYm-2lBn-uQRn-0mXY-X1mD-O9Ex-kh1Jbh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6a6825df-a8f3-41ad-b7ed-1604f01d2f74,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "lv_uuid": "HIhrYm-2lBn-uQRn-0mXY-X1mD-O9Ex-kh1Jbh",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "name": "ceph_lv0",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "tags": {
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.block_uuid": "HIhrYm-2lBn-uQRn-0mXY-X1mD-O9Ex-kh1Jbh",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.cephx_lockbox_secret": "",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.cluster_name": "ceph",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.crush_device_class": "",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.encrypted": "0",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.osd_fsid": "6a6825df-a8f3-41ad-b7ed-1604f01d2f74",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.osd_id": "0",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.type": "block",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.vdo": "0",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:                 "ceph.with_tpm": "0"
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             },
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "type": "block",
Oct 09 09:34:46 compute-1 tender_joliot[6954]:             "vg_name": "ceph_vg0"
Oct 09 09:34:46 compute-1 tender_joliot[6954]:         }
Oct 09 09:34:46 compute-1 tender_joliot[6954]:     ]
Oct 09 09:34:46 compute-1 tender_joliot[6954]: }
Oct 09 09:34:46 compute-1 systemd[1]: libpod-58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69.scope: Deactivated successfully.
Oct 09 09:34:46 compute-1 podman[6963]: 2025-10-09 09:34:46.067975752 +0000 UTC m=+0.014807027 container died 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 09 09:34:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8-merged.mount: Deactivated successfully.
Oct 09 09:34:46 compute-1 podman[6963]: 2025-10-09 09:34:46.085440668 +0000 UTC m=+0.032271934 container remove 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:34:46 compute-1 systemd[1]: libpod-conmon-58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69.scope: Deactivated successfully.
Oct 09 09:34:46 compute-1 sudo[6851]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:46 compute-1 sudo[6973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:46 compute-1 sudo[6973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:46 compute-1 sudo[6973]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:46 compute-1 sudo[6998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:46 compute-1 sudo[6998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.452353991 +0000 UTC m=+0.022414368 container create 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:34:46 compute-1 systemd[1]: Started libpod-conmon-370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381.scope.
Oct 09 09:34:46 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.502657377 +0000 UTC m=+0.072717754 container init 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.506814795 +0000 UTC m=+0.076875162 container start 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.507852962 +0000 UTC m=+0.077913329 container attach 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:34:46 compute-1 stoic_elgamal[7069]: 167 167
Oct 09 09:34:46 compute-1 systemd[1]: libpod-370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381.scope: Deactivated successfully.
Oct 09 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.509796907 +0000 UTC m=+0.079857274 container died 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:34:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-729e6b3081340c0c41de3bcc35b8f2593ae1744d26038fcb2bf5d984201db246-merged.mount: Deactivated successfully.
Oct 09 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.528197056 +0000 UTC m=+0.098257423 container remove 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 09 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.443188232 +0000 UTC m=+0.013248609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:46 compute-1 systemd[1]: libpod-conmon-370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381.scope: Deactivated successfully.
Oct 09 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.695571994 +0000 UTC m=+0.025933533 container create 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 09 09:34:46 compute-1 systemd[1]: Started libpod-conmon-3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937.scope.
Oct 09 09:34:46 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.749082246 +0000 UTC m=+0.079443785 container init 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.755259693 +0000 UTC m=+0.085621221 container start 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 09 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.756368532 +0000 UTC m=+0.086730061 container attach 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.68523149 +0000 UTC m=+0.015593038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test[7110]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct 09 09:34:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test[7110]:                             [--no-systemd] [--no-tmpfs]
Oct 09 09:34:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test[7110]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 09 09:34:46 compute-1 systemd[1]: libpod-3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937.scope: Deactivated successfully.
Oct 09 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.902796219 +0000 UTC m=+0.233157749 container died 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 09 09:34:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134-merged.mount: Deactivated successfully.
Oct 09 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.924130861 +0000 UTC m=+0.254492389 container remove 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:34:46 compute-1 systemd[1]: libpod-conmon-3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937.scope: Deactivated successfully.
Oct 09 09:34:47 compute-1 systemd[1]: Reloading.
Oct 09 09:34:47 compute-1 systemd-sysv-generator[7168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:34:47 compute-1 systemd-rc-local-generator[7165]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:34:47 compute-1 systemd[1]: Reloading.
Oct 09 09:34:47 compute-1 systemd-rc-local-generator[7202]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:34:47 compute-1 systemd-sysv-generator[7205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:34:47 compute-1 systemd[1]: Starting Ceph osd.0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.623795002 +0000 UTC m=+0.025790483 container create a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 09 09:34:47 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.664835404 +0000 UTC m=+0.066830875 container init a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 09 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.670682648 +0000 UTC m=+0.072678119 container start a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 09 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.672007856 +0000 UTC m=+0.074003327 container attach a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 09 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.613037491 +0000 UTC m=+0.015032982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:47 compute-1 bash[7259]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:47 compute-1 bash[7259]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:48 compute-1 lvm[7353]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:34:48 compute-1 lvm[7353]: VG ceph_vg0 finished
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:48 compute-1 bash[7259]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 09 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:48 compute-1 lvm[7357]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:34:48 compute-1 lvm[7357]: VG ceph_vg0 finished
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 09 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 09 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 09 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 09 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 09 09:34:48 compute-1 bash[7259]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 09 09:34:48 compute-1 systemd[1]: libpod-a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c.scope: Deactivated successfully.
Oct 09 09:34:48 compute-1 podman[7259]: 2025-10-09 09:34:48.591464046 +0000 UTC m=+0.993459527 container died a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:34:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8-merged.mount: Deactivated successfully.
Oct 09 09:34:48 compute-1 podman[7259]: 2025-10-09 09:34:48.612910358 +0000 UTC m=+1.014905829 container remove a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 09 09:34:48 compute-1 podman[7497]: 2025-10-09 09:34:48.746618244 +0000 UTC m=+0.025384889 container create de046c66ba96a3549bd259ccfa5eb6fa1b5cdd3b076566d5ad43e142eefce08a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 09 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:48 compute-1 podman[7497]: 2025-10-09 09:34:48.788129364 +0000 UTC m=+0.066896028 container init de046c66ba96a3549bd259ccfa5eb6fa1b5cdd3b076566d5ad43e142eefce08a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 09 09:34:48 compute-1 podman[7497]: 2025-10-09 09:34:48.792006023 +0000 UTC m=+0.070772667 container start de046c66ba96a3549bd259ccfa5eb6fa1b5cdd3b076566d5ad43e142eefce08a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 09 09:34:48 compute-1 bash[7497]: de046c66ba96a3549bd259ccfa5eb6fa1b5cdd3b076566d5ad43e142eefce08a
Oct 09 09:34:48 compute-1 podman[7497]: 2025-10-09 09:34:48.736069497 +0000 UTC m=+0.014836162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:48 compute-1 systemd[1]: Started Ceph osd.0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:34:48 compute-1 ceph-osd[7514]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:34:48 compute-1 ceph-osd[7514]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct 09 09:34:48 compute-1 ceph-osd[7514]: pidfile_write: ignore empty --pid-file
Oct 09 09:34:48 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:48 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:48 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:48 compute-1 sudo[6998]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:48 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:48 compute-1 sudo[7526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:48 compute-1 sudo[7526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:48 compute-1 sudo[7526]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:48 compute-1 sudo[7551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- raw list --format json
Oct 09 09:34:48 compute-1 sudo[7551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.187536097 +0000 UTC m=+0.028960078 container create cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:34:49 compute-1 systemd[1]: Started libpod-conmon-cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071.scope.
Oct 09 09:34:49 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.23921702 +0000 UTC m=+0.080641001 container init cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 09 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.243887063 +0000 UTC m=+0.085311045 container start cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.244932074 +0000 UTC m=+0.086356075 container attach cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 09:34:49 compute-1 mystifying_jones[7621]: 167 167
Oct 09 09:34:49 compute-1 systemd[1]: libpod-cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071.scope: Deactivated successfully.
Oct 09 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.247854805 +0000 UTC m=+0.089278785 container died cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 09 09:34:49 compute-1 systemd[1]: var-lib-containers-storage-overlay-cb1bd102f2a270647ee9097340e3e70733f6d9f4c0afb09c0c3dac7d5e116e8f-merged.mount: Deactivated successfully.
Oct 09 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.264988525 +0000 UTC m=+0.106412507 container remove cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.173442586 +0000 UTC m=+0.014866588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:49 compute-1 systemd[1]: libpod-conmon-cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071.scope: Deactivated successfully.
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.377733181 +0000 UTC m=+0.030185810 container create 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 09 09:34:49 compute-1 systemd[1]: Started libpod-conmon-1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d.scope.
Oct 09 09:34:49 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.431953851 +0000 UTC m=+0.084406481 container init 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.437839758 +0000 UTC m=+0.090292387 container start 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.438830487 +0000 UTC m=+0.091283115 container attach 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.363583525 +0000 UTC m=+0.016036174 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:49 compute-1 lvm[7736]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:34:49 compute-1 lvm[7736]: VG ceph_vg0 finished
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991ddc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991ddc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991ddc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 09 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991ddc00 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:49 compute-1 focused_chebyshev[7658]: {}
Oct 09 09:34:49 compute-1 systemd[1]: libpod-1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d.scope: Deactivated successfully.
Oct 09 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.94224911 +0000 UTC m=+0.594701739 container died 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:34:49 compute-1 systemd[1]: var-lib-containers-storage-overlay-0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7-merged.mount: Deactivated successfully.
Oct 09 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.963175882 +0000 UTC m=+0.615628512 container remove 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:34:49 compute-1 systemd[1]: libpod-conmon-1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d.scope: Deactivated successfully.
Oct 09 09:34:49 compute-1 sudo[7551]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-1 sudo[7754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:34:50 compute-1 sudo[7754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-1 sudo[7754]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:50 compute-1 sudo[7779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:50 compute-1 sudo[7779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-1 sudo[7779]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-1 sudo[7805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:34:50 compute-1 sudo[7805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-1 ceph-osd[7514]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 09 09:34:50 compute-1 ceph-osd[7514]: load: jerasure load: lrc 
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:50 compute-1 podman[7891]: 2025-10-09 09:34:50.606731452 +0000 UTC m=+0.039603475 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 09 09:34:50 compute-1 podman[7891]: 2025-10-09 09:34:50.681931385 +0000 UTC m=+0.114803397 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:50 compute-1 sudo[7805]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-1 sudo[7937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:50 compute-1 sudo[7937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-1 sudo[7937]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-1 sudo[7962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- inventory --format=json-pretty --filter-for-batch
Oct 09 09:34:50 compute-1 sudo[7962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-1 ceph-osd[7514]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 09 09:34:50 compute-1 ceph-osd[7514]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.1322869 +0000 UTC m=+0.026035595 container create ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 09 09:34:51 compute-1 systemd[1]: Started libpod-conmon-ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d.scope.
Oct 09 09:34:51 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.183912769 +0000 UTC m=+0.077661464 container init ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.188732445 +0000 UTC m=+0.082481140 container start ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 09 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.189752459 +0000 UTC m=+0.083501154 container attach ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 09 09:34:51 compute-1 inspiring_mcnulty[8040]: 167 167
Oct 09 09:34:51 compute-1 systemd[1]: libpod-ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d.scope: Deactivated successfully.
Oct 09 09:34:51 compute-1 conmon[8040]: conmon ae3b8848b5cafb38fd59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d.scope/container/memory.events
Oct 09 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.192199332 +0000 UTC m=+0.085948037 container died ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:34:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-aa8e0e53c9586cba66bc90c8215558cf6fbd4dc0eaf9fd50967f425d020ddaaf-merged.mount: Deactivated successfully.
Oct 09 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.211460003 +0000 UTC m=+0.105208699 container remove ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 09 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.121325896 +0000 UTC m=+0.015074601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:51 compute-1 systemd[1]: libpod-conmon-ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d.scope: Deactivated successfully.
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount shared_bdev_used = 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: RocksDB version: 7.9.2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Git sha 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB SUMMARY
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB Session ID:  UVT2D1S8UT2VTLEPFV4T
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: CURRENT file:  CURRENT
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: IDENTITY file:  IDENTITY
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.error_if_exists: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.create_if_missing: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.paranoid_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                     Options.env: 0x560c9a049dc0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                Options.info_log: 0x560c9a04d7a0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_file_opening_threads: 16
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.statistics: (nil)
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.use_fsync: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.max_log_file_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.allow_fallocate: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.use_direct_reads: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.create_missing_column_families: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.db_log_dir: 
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                 Options.wal_dir: db.wal
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.advise_random_on_open: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.write_buffer_manager: 0x560c9a144a00
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                            Options.rate_limiter: (nil)
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.unordered_write: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.row_cache: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.wal_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.allow_ingest_behind: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.two_write_queues: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.manual_wal_flush: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.wal_compression: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.atomic_flush: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.log_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.allow_data_in_errors: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.db_host_id: __hostname__
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_background_jobs: 4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_background_compactions: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_subcompactions: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.max_open_files: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.bytes_per_sync: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.max_background_flushes: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Compression algorithms supported:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kZSTD supported: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kXpressCompression supported: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kBZip2Compression supported: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kLZ4Compression supported: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kZlibCompression supported: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kLZ4HCCompression supported: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kSnappyCompression supported: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c992729b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c992729b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c992729b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ade99e4d-7871-44b8-bb7f-d40708f63a2b
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491315992, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491316139, "job": 1, "event": "recovery_finished"}
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: freelist init
Oct 09 09:34:51 compute-1 ceph-osd[7514]: freelist _read_cfg
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs umount
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) close
Oct 09 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.33540254 +0000 UTC m=+0.029316260 container create fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:34:51 compute-1 systemd[1]: Started libpod-conmon-fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed.scope.
Oct 09 09:34:51 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:34:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.392469897 +0000 UTC m=+0.086383627 container init fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 09 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.397578829 +0000 UTC m=+0.091492548 container start fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 09 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.398565489 +0000 UTC m=+0.092479208 container attach fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 09 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.324333742 +0000 UTC m=+0.018247482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluefs mount shared_bdev_used = 4718592
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: RocksDB version: 7.9.2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Git sha 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB SUMMARY
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB Session ID:  UVT2D1S8UT2VTLEPFV4S
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: CURRENT file:  CURRENT
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: IDENTITY file:  IDENTITY
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.error_if_exists: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.create_if_missing: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.paranoid_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                     Options.env: 0x560c9a1e82a0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                Options.info_log: 0x560c9a04d920
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_file_opening_threads: 16
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.statistics: (nil)
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.use_fsync: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.max_log_file_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.allow_fallocate: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.use_direct_reads: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.create_missing_column_families: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.db_log_dir: 
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                 Options.wal_dir: db.wal
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.advise_random_on_open: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.write_buffer_manager: 0x560c9a144a00
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                            Options.rate_limiter: (nil)
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.unordered_write: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.row_cache: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.wal_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.allow_ingest_behind: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.two_write_queues: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.manual_wal_flush: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.wal_compression: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.atomic_flush: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.log_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.allow_data_in_errors: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.db_host_id: __hostname__
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_background_jobs: 4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_background_compactions: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_subcompactions: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.max_open_files: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.bytes_per_sync: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.max_background_flushes: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Compression algorithms supported:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kZSTD supported: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kXpressCompression supported: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kBZip2Compression supported: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kLZ4Compression supported: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kZlibCompression supported: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kLZ4HCCompression supported: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kSnappyCompression supported: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c99273350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04dac0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c992729b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04dac0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c992729b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04dac0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x560c992729b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ade99e4d-7871-44b8-bb7f-d40708f63a2b
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491585542, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491591229, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002491, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ade99e4d-7871-44b8-bb7f-d40708f63a2b", "db_session_id": "UVT2D1S8UT2VTLEPFV4S", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491592329, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1599, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 473, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002491, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ade99e4d-7871-44b8-bb7f-d40708f63a2b", "db_session_id": "UVT2D1S8UT2VTLEPFV4S", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491593379, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002491, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ade99e4d-7871-44b8-bb7f-d40708f63a2b", "db_session_id": "UVT2D1S8UT2VTLEPFV4S", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491593827, "job": 1, "event": "recovery_finished"}
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560c9a214000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB pointer 0x560c9a1f4000
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 09 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
                                          
                                          ** Compaction Stats [m-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-0] **
                                          
                                          ** Compaction Stats [m-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-1] **
                                          
                                          ** Compaction Stats [m-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-2] **
                                          
                                          ** Compaction Stats [p-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-0] **
                                          
                                          ** Compaction Stats [p-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-1] **
                                          
                                          ** Compaction Stats [p-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-2] **
                                          
                                          ** Compaction Stats [O-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-0] **
                                          
                                          ** Compaction Stats [O-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-1] **
                                          
                                          ** Compaction Stats [O-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-2] **
                                          
                                          ** Compaction Stats [L] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [L] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [L] **
                                          
                                          ** Compaction Stats [P] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [P] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [P] **
Oct 09 09:34:51 compute-1 ceph-osd[7514]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 09 09:34:51 compute-1 ceph-osd[7514]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 09 09:34:51 compute-1 ceph-osd[7514]: _get_class not permitted to load lua
Oct 09 09:34:51 compute-1 ceph-osd[7514]: _get_class not permitted to load sdk
Oct 09 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 09 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 09 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 09 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 09 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 load_pgs
Oct 09 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 load_pgs opened 0 pgs
Oct 09 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 log_to_monitors true
Oct 09 09:34:51 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0[7510]: 2025-10-09T09:34:51.608+0000 7f027f093740 -1 osd.0 0 log_to_monitors true
Oct 09 09:34:51 compute-1 youthful_kirch[8267]: [
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:     {
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         "available": false,
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         "being_replaced": false,
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         "ceph_device_lvm": false,
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         "lsm_data": {},
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         "lvs": [],
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         "path": "/dev/sr0",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         "rejected_reasons": [
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "Insufficient space (<5GB)",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "Has a FileSystem"
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         ],
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         "sys_api": {
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "actuators": null,
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "device_nodes": [
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:                 "sr0"
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             ],
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "devname": "sr0",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "human_readable_size": "474.00 KB",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "id_bus": "ata",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "model": "QEMU DVD-ROM",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "nr_requests": "64",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "parent": "/dev/sr0",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "partitions": {},
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "path": "/dev/sr0",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "removable": "1",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "rev": "2.5+",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "ro": "0",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "rotational": "0",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "sas_address": "",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "sas_device_handle": "",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "scheduler_mode": "mq-deadline",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "sectors": 0,
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "sectorsize": "2048",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "size": 485376.0,
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "support_discard": "2048",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "type": "disk",
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:             "vendor": "QEMU"
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:         }
Oct 09 09:34:51 compute-1 youthful_kirch[8267]:     }
Oct 09 09:34:51 compute-1 youthful_kirch[8267]: ]
Oct 09 09:34:51 compute-1 systemd[1]: libpod-fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed.scope: Deactivated successfully.
Oct 09 09:34:51 compute-1 conmon[8267]: conmon fdfcc004bc91a1ffeeff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed.scope/container/memory.events
Oct 09 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.827497454 +0000 UTC m=+0.521411175 container died fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:34:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779-merged.mount: Deactivated successfully.
Oct 09 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.850446399 +0000 UTC m=+0.544360120 container remove fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:34:51 compute-1 systemd[1]: libpod-conmon-fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed.scope: Deactivated successfully.
Oct 09 09:34:51 compute-1 sudo[7962]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:52 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 09 09:34:52 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 09 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 done with init, starting boot process
Oct 09 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 start_boot
Oct 09 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 09 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 09 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 09 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 09 09:34:53 compute-1 ceph-osd[7514]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 44.690 iops: 11440.698 elapsed_sec: 0.262
Oct 09 09:34:54 compute-1 ceph-osd[7514]: log_channel(cluster) log [WRN] : OSD bench result of 11440.697696 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 0 waiting for initial osdmap
Oct 09 09:34:54 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0[7510]: 2025-10-09T09:34:54.308+0000 7f027b016640 -1 osd.0 0 waiting for initial osdmap
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 check_osdmap_features require_osd_release unknown -> squid
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 09 09:34:54 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0[7510]: 2025-10-09T09:34:54.334+0000 7f027663e640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 9 state: booting -> active
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 09 09:34:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 9 pg[1.0( empty local-lis/les=0/0 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:34:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 10 pg[2.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:34:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=9/10 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:34:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 11 pg[2.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 15 pg[7.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=10/10 les/c/f=11/11/0 sis=16 pruub=10.971442223s) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active pruub 20.894191742s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 16 pg[7.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=10/10 les/c/f=11/11/0 sis=16 pruub=10.971442223s) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown pruub 20.894191742s@ mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1f( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1d( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1c( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1e( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1b( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.a( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.9( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.8( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.7( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.6( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.4( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.2( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.5( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.3( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.b( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.c( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.d( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.e( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.f( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.10( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.11( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.12( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.13( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.14( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.15( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.16( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.17( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.18( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.19( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1a( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.a( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.7( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.9( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.6( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.4( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.0( empty local-lis/les=16/17 n=0 ec=10/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.3( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.2( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.8( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.10( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.11( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.14( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.13( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.15( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.16( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.17( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.19( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1a( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:35:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct 09 09:35:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct 09 09:35:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 09 09:35:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 09 09:35:04 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 09 09:35:04 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 09 09:35:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 09 09:35:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 09 09:35:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Oct 09 09:35:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Oct 09 09:35:07 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 09 09:35:07 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940311432s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937086105s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940385818s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937189102s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940285683s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937086105s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944421768s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941232681s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940368652s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937189102s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944404602s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941232681s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.a( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940256119s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937124252s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.a( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940241814s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937124252s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.9( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940279007s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937196732s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.9( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940266609s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937196732s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.6( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940261841s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937210083s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.6( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940230370s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937210083s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.4( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940224648s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937221527s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.4( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940214157s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937221527s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944583893s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941667557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944221497s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941305161s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944576263s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941667557s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944540977s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941642761s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944530487s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941642761s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944205284s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941354752s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944196701s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941354752s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.10( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944177628s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941366196s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.10( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944170952s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941366196s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.13( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944331169s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941562653s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.13( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944323540s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941562653s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.15( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944312096s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941577911s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.15( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944303513s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941577911s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.19( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944275856s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941610336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.19( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944268227s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941610336s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944208145s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941305161s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:35:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 09 09:35:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 09 09:35:09 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 09 09:35:09 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 09 09:35:10 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 09 09:35:10 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 09 09:35:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Oct 09 09:35:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Oct 09 09:35:12 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Oct 09 09:35:12 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Oct 09 09:35:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 09 09:35:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 09 09:35:14 compute-1 sudo[9501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:14 compute-1 sudo[9501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:14 compute-1 sudo[9501]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:14 compute-1 sudo[9526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:14 compute-1 sudo[9526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:14 compute-1 systemd[1268]: Starting Mark boot as successful...
Oct 09 09:35:14 compute-1 systemd[1268]: Finished Mark boot as successful.
Oct 09 09:35:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 09 09:35:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 09 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.722855854 +0000 UTC m=+0.023787858 container create 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 09 09:35:14 compute-1 systemd[1]: Started libpod-conmon-8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26.scope.
Oct 09 09:35:14 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.779591416 +0000 UTC m=+0.080523441 container init 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.783745228 +0000 UTC m=+0.084677222 container start 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:35:14 compute-1 keen_gauss[9599]: 167 167
Oct 09 09:35:14 compute-1 systemd[1]: libpod-8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26.scope: Deactivated successfully.
Oct 09 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.787191224 +0000 UTC m=+0.088123238 container attach 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.788083196 +0000 UTC m=+0.089015190 container died 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 09 09:35:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-b4fc8d1e1f4e46a0ccfa57819bb46e04067713be6e09287591b3805bb535613e-merged.mount: Deactivated successfully.
Oct 09 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.80888345 +0000 UTC m=+0.109815434 container remove 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.712726699 +0000 UTC m=+0.013658713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:14 compute-1 systemd[1]: libpod-conmon-8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26.scope: Deactivated successfully.
Oct 09 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.853003388 +0000 UTC m=+0.031096847 container create 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:35:14 compute-1 systemd[1]: Started libpod-conmon-76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d.scope.
Oct 09 09:35:14 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:35:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.907820343 +0000 UTC m=+0.085913801 container init 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 09 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.911634484 +0000 UTC m=+0.089727942 container start 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 09 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.912615343 +0000 UTC m=+0.090708801 container attach 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 09 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.8366405 +0000 UTC m=+0.014733978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:14 compute-1 systemd[1]: libpod-76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d.scope: Deactivated successfully.
Oct 09 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.95088407 +0000 UTC m=+0.128977528 container died 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:35:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711-merged.mount: Deactivated successfully.
Oct 09 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.967116624 +0000 UTC m=+0.145210082 container remove 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:14 compute-1 systemd[1]: libpod-conmon-76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d.scope: Deactivated successfully.
Oct 09 09:35:14 compute-1 systemd[1]: Reloading.
Oct 09 09:35:15 compute-1 systemd-rc-local-generator[9685]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:15 compute-1 systemd-sysv-generator[9688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:15 compute-1 systemd[1]: Reloading.
Oct 09 09:35:15 compute-1 systemd-rc-local-generator[9725]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:15 compute-1 systemd-sysv-generator[9728]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:15 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:35:15 compute-1 podman[9779]: 2025-10-09 09:35:15.504576114 +0000 UTC m=+0.025394608 container create e3c4abd37c3ede7431f896d3dc6226c8674cda33134f769dc780272f31a2cc63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:35:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b7968448a1334a01368ec30e24351dca9a9e498f66aa3977a5fed5ff6cb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b7968448a1334a01368ec30e24351dca9a9e498f66aa3977a5fed5ff6cb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b7968448a1334a01368ec30e24351dca9a9e498f66aa3977a5fed5ff6cb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b7968448a1334a01368ec30e24351dca9a9e498f66aa3977a5fed5ff6cb6/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:15 compute-1 podman[9779]: 2025-10-09 09:35:15.549099773 +0000 UTC m=+0.069918267 container init e3c4abd37c3ede7431f896d3dc6226c8674cda33134f769dc780272f31a2cc63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:35:15 compute-1 podman[9779]: 2025-10-09 09:35:15.553065839 +0000 UTC m=+0.073884324 container start e3c4abd37c3ede7431f896d3dc6226c8674cda33134f769dc780272f31a2cc63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-1, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:15 compute-1 bash[9779]: e3c4abd37c3ede7431f896d3dc6226c8674cda33134f769dc780272f31a2cc63
Oct 09 09:35:15 compute-1 podman[9779]: 2025-10-09 09:35:15.494476773 +0000 UTC m=+0.015295267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:15 compute-1 systemd[1]: Started Ceph mon.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:35:15 compute-1 ceph-mon[9795]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:35:15 compute-1 ceph-mon[9795]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:15 compute-1 ceph-mon[9795]: load: jerasure load: lrc 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: RocksDB version: 7.9.2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Git sha 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: DB SUMMARY
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: DB Session ID:  M9CZJU0HKVV71NP1SGV8
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: CURRENT file:  CURRENT
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: IDENTITY file:  IDENTITY
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                         Options.error_if_exists: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.create_if_missing: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                         Options.paranoid_checks: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                                     Options.env: 0x55e4b3b9dc20
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                                Options.info_log: 0x55e4b559fa20
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.max_file_opening_threads: 16
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                              Options.statistics: (nil)
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                               Options.use_fsync: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.max_log_file_size: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                         Options.allow_fallocate: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.use_direct_reads: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.create_missing_column_families: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                              Options.db_log_dir: 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                                 Options.wal_dir: 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.advise_random_on_open: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.write_buffer_manager: 0x55e4b55a3900
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                            Options.rate_limiter: (nil)
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.unordered_write: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                               Options.row_cache: None
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                              Options.wal_filter: None
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.allow_ingest_behind: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.two_write_queues: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.manual_wal_flush: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.wal_compression: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.atomic_flush: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.log_readahead_size: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.allow_data_in_errors: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.db_host_id: __hostname__
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.max_background_jobs: 2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.max_background_compactions: -1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.max_subcompactions: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.max_total_wal_size: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                          Options.max_open_files: -1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                          Options.bytes_per_sync: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:       Options.compaction_readahead_size: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.max_background_flushes: -1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Compression algorithms supported:
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         kZSTD supported: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         kXpressCompression supported: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         kBZip2Compression supported: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         kLZ4Compression supported: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         kZlibCompression supported: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         kLZ4HCCompression supported: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         kSnappyCompression supported: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:           Options.merge_operator: 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.compaction_filter: None
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e4b559f6a0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x55e4b55c29b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.write_buffer_size: 33554432
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:  Options.max_write_buffer_number: 2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.compression: NoCompression
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.num_levels: 7
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:35:15 compute-1 sudo[9526]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 94a5d839-0858-4e7b-94a4-0a54b15338db
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002515584213, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002515585012, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002515585086, "job": 1, "event": "recovery_finished"}
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e4b55c4e00
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: DB pointer 0x55e4b55d4000
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x55e4b55c29b0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.9e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
Oct 09 09:35:15 compute-1 ceph-mon[9795]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Oct 09 09:35:15 compute-1 ceph-mon[9795]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(???) e0 preinit fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).mds e1 new map
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                          e1
                                          btime 2025-10-09T09:33:39:705322+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: -1
                                           
                                          No filesystems configured
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 crush map has features 3314933000852226048, adjusting msgr requires
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e6: 2 total, 0 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-1 to  5248M
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e7: 2 total, 0 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891] boot
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e8: 2 total, 1 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: purged_snaps scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: purged_snaps scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: OSD bench result of 25996.309425 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/854922803' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: purged_snaps scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: purged_snaps scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: OSD bench result of 11440.697696 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284] boot
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e9: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v31: 1 pgs: 1 unknown; 0 B data, 122 MiB used, 20 GiB / 20 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e10: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e11: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mgrmap e9: compute-0.lwqgfy(active, since 60s)
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v34: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 148 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e12: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e13: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v37: 5 pgs: 1 active+clean, 2 unknown, 2 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e14: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e15: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v40: 7 pgs: 3 active+clean, 3 unknown, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e16: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e17: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1d scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1d scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v43: 38 pgs: 3 active+clean, 34 unknown, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e18: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1f scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1f scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e19: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1b scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1b scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v46: 38 pgs: 6 active+clean, 32 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e20: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.9 scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.9 scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e21: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.6 deep-scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.6 deep-scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v49: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e22: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1c scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1c scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v51: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.8 scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.8 scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: osdmap e23: 2 total, 2 up, 2 in
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.d deep-scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.d deep-scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3070980083' entity='client.admin' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.14223 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Saving service ingress.rgw.default spec with placement count:2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.7 scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.7 scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.c scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.c scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.2 scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v53: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.2 scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1e scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.1e scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.5 deep-scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.5 deep-scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.14225 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Saving service node-exporter spec with placement *
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Saving service grafana spec with placement compute-0;count:1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Saving service prometheus spec with placement compute-0;count:1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Saving service alertmanager spec with placement compute-0;count:1
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: pgmap v54: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Deploying daemon mon.compute-2 on compute-2
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2266537364' entity='client.admin' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.a scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.a scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.0 scrub starts
Oct 09 09:35:15 compute-1 ceph-mon[9795]: 2.0 scrub ok
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3921635866' entity='client.admin' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 09 09:35:15 compute-1 ceph-mon[9795]: Cluster is now healthy
Oct 09 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4272592449' entity='client.admin' 
Oct 09 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct 09 09:35:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Oct 09 09:35:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Oct 09 09:35:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Oct 09 09:35:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Oct 09 09:35:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Oct 09 09:35:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 09 09:35:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 09 09:35:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 09 09:35:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 09 09:35:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 09 09:35:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 09 09:35:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct 09 09:35:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct 09 09:35:21 compute-1 ceph-mon[9795]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Oct 09 09:35:21 compute-1 ceph-mon[9795]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Oct 09 09:35:21 compute-1 ceph-mon[9795]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 09 09:35:21 compute-1 ceph-mon[9795]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 09:35:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 09 09:35:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.4 scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.4 scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.3 scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.3 scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: pgmap v55: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:24 compute-1 ceph-mon[9795]: Deploying daemon mon.compute-1 on compute-1
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-0 calling monitor election
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.1 scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.1 scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.b scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.b scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.10 scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.10 scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.f deep-scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.f deep-scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: pgmap v56: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-2 calling monitor election
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.e scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.e scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.11 deep-scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.11 deep-scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.15 scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.15 scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.12 deep-scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.12 deep-scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: pgmap v57: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 09 09:35:24 compute-1 ceph-mon[9795]: monmap epoch 2
Oct 09 09:35:24 compute-1 ceph-mon[9795]: fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:24 compute-1 ceph-mon[9795]: last_changed 2025-10-09T09:35:14.415832+0000
Oct 09 09:35:24 compute-1 ceph-mon[9795]: created 2025-10-09T09:33:38.201593+0000
Oct 09 09:35:24 compute-1 ceph-mon[9795]: min_mon_release 19 (squid)
Oct 09 09:35:24 compute-1 ceph-mon[9795]: election_strategy: 1
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 09 09:35:24 compute-1 ceph-mon[9795]: fsmap 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: osdmap e23: 2 total, 2 up, 2 in
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mgrmap e9: compute-0.lwqgfy(active, since 83s)
Oct 09 09:35:24 compute-1 ceph-mon[9795]: overall HEALTH_OK
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,os=Linux}
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.13 scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.13 scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-0 calling monitor election
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-2 calling monitor election
Oct 09 09:35:24 compute-1 ceph-mon[9795]: pgmap v58: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.17 scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.17 scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.18 scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.18 scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1 calling monitor election
Oct 09 09:35:24 compute-1 ceph-mon[9795]: pgmap v59: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.1a scrub starts
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2.1a scrub ok
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: pgmap v60: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 09 09:35:24 compute-1 ceph-mon[9795]: monmap epoch 3
Oct 09 09:35:24 compute-1 ceph-mon[9795]: fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:24 compute-1 ceph-mon[9795]: last_changed 2025-10-09T09:35:19.619597+0000
Oct 09 09:35:24 compute-1 ceph-mon[9795]: created 2025-10-09T09:33:38.201593+0000
Oct 09 09:35:24 compute-1 ceph-mon[9795]: min_mon_release 19 (squid)
Oct 09 09:35:24 compute-1 ceph-mon[9795]: election_strategy: 1
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 09 09:35:24 compute-1 ceph-mon[9795]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 09 09:35:24 compute-1 ceph-mon[9795]: fsmap 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: osdmap e23: 2 total, 2 up, 2 in
Oct 09 09:35:24 compute-1 ceph-mon[9795]: mgrmap e9: compute-0.lwqgfy(active, since 88s)
Oct 09 09:35:24 compute-1 ceph-mon[9795]: overall HEALTH_OK
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:24 compute-1 sudo[9834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:24 compute-1 sudo[9834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:24 compute-1 sudo[9834]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:24 compute-1 sudo[9859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:24 compute-1 sudo[9859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.058670306 +0000 UTC m=+0.027135038 container create 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:35:25 compute-1 systemd[1]: Started libpod-conmon-54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96.scope.
Oct 09 09:35:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:25 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.118532702 +0000 UTC m=+0.086997424 container init 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 09 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.124459005 +0000 UTC m=+0.092923728 container start 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 09 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.125494367 +0000 UTC m=+0.093959089 container attach 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:35:25 compute-1 crazy_wright[9932]: 167 167
Oct 09 09:35:25 compute-1 systemd[1]: libpod-54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96.scope: Deactivated successfully.
Oct 09 09:35:25 compute-1 conmon[9932]: conmon 54909601e7864ba18209 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96.scope/container/memory.events
Oct 09 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.047216512 +0000 UTC m=+0.015681254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:25 compute-1 podman[9937]: 2025-10-09 09:35:25.161531758 +0000 UTC m=+0.020449274 container died 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 09 09:35:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-c9fc58fbf18e0c9da5c2f48c9bebdb4ab964b8b24621bdb98b7776ba9001661c-merged.mount: Deactivated successfully.
Oct 09 09:35:25 compute-1 podman[9937]: 2025-10-09 09:35:25.180144968 +0000 UTC m=+0.039062465 container remove 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:35:25 compute-1 systemd[1]: libpod-conmon-54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96.scope: Deactivated successfully.
Oct 09 09:35:25 compute-1 systemd[1]: Reloading.
Oct 09 09:35:25 compute-1 systemd-sysv-generator[9973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:25 compute-1 systemd-rc-local-generator[9970]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:25 compute-1 sudo[10008]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqzxaqhljiqhijiojnbwgqnjfqmwlxyl ; /usr/bin/python3'
Oct 09 09:35:25 compute-1 sudo[10008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:35:25 compute-1 systemd[1]: Reloading.
Oct 09 09:35:25 compute-1 systemd-sysv-generator[10038]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:25 compute-1 systemd-rc-local-generator[10035]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:25 compute-1 python3[10012]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:35:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e23 _set_new_cache_sizes cache_size:1019937216 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:25 compute-1 sudo[10008]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:25 compute-1 systemd[1]: Starting Ceph mgr.compute-1.etokpp for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:35:25 compute-1 podman[10100]: 2025-10-09 09:35:25.850762809 +0000 UTC m=+0.028731478 container create d27f3e957991263543395e3774a5a0d39a40a8e12d215b4fd0d84b8e79139206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 09 09:35:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7ddd87911cf4d42632d5a95b4fb7601b3eb4efb6b31c90ea451bf643e3236c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7ddd87911cf4d42632d5a95b4fb7601b3eb4efb6b31c90ea451bf643e3236c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7ddd87911cf4d42632d5a95b4fb7601b3eb4efb6b31c90ea451bf643e3236c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7ddd87911cf4d42632d5a95b4fb7601b3eb4efb6b31c90ea451bf643e3236c/merged/var/lib/ceph/mgr/ceph-compute-1.etokpp supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:25 compute-1 podman[10100]: 2025-10-09 09:35:25.901122441 +0000 UTC m=+0.079091130 container init d27f3e957991263543395e3774a5a0d39a40a8e12d215b4fd0d84b8e79139206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:35:25 compute-1 podman[10100]: 2025-10-09 09:35:25.905621383 +0000 UTC m=+0.083590052 container start d27f3e957991263543395e3774a5a0d39a40a8e12d215b4fd0d84b8e79139206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:35:25 compute-1 bash[10100]: d27f3e957991263543395e3774a5a0d39a40a8e12d215b4fd0d84b8e79139206
Oct 09 09:35:25 compute-1 podman[10100]: 2025-10-09 09:35:25.838722099 +0000 UTC m=+0.016690788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:25 compute-1 systemd[1]: Started Ceph mgr.compute-1.etokpp for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:35:25 compute-1 ceph-mgr[10116]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:35:25 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:35:25 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:25 compute-1 sudo[9859]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:25 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct 09 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:26.054+0000 7f5bb9ecc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct 09 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:26.125+0000 7f5bb9ecc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct 09 09:35:26 compute-1 ceph-mon[9795]: Deploying daemon mgr.compute-1.etokpp on compute-1
Oct 09 09:35:26 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3098806995' entity='client.admin' 
Oct 09 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 09 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct 09 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:26.817+0000 7f5bb9ecc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:35:27 compute-1 ceph-mon[9795]: Deploying daemon crash.compute-2 on compute-2
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2874472706' entity='client.admin' 
Oct 09 09:35:27 compute-1 ceph-mon[9795]: pgmap v61: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:27.366+0000 7f5bb9ecc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:   from numpy import show_config as show_numpy_config
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:27.512+0000 7f5bb9ecc140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:27.576+0000 7f5bb9ecc140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:27.697+0000 7f5bb9ecc140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct 09 09:35:28 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct 09 09:35:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3618703096' entity='client.admin' 
Oct 09 09:35:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 09 09:35:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]: dispatch
Oct 09 09:35:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]': finished
Oct 09 09:35:28 compute-1 ceph-mon[9795]: osdmap e24: 3 total, 2 up, 3 in
Oct 09 09:35:28 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.578+0000 7f5bb9ecc140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.768+0000 7f5bb9ecc140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.835+0000 7f5bb9ecc140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.893+0000 7f5bb9ecc140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.963+0000 7f5bb9ecc140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct 09 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:29.025+0000 7f5bb9ecc140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct 09 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:29.324+0000 7f5bb9ecc140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:29.409+0000 7f5bb9ecc140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct 09 09:35:29 compute-1 ceph-mon[9795]: pgmap v62: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:29 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 09 09:35:29 compute-1 ceph-mon[9795]: mgrmap e10: compute-0.lwqgfy(active, since 92s)
Oct 09 09:35:29 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/954261656' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 09 09:35:29 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 09 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct 09 09:35:29 compute-1 sshd-session[3017]: Connection closed by 192.168.122.100 port 38672
Oct 09 09:35:29 compute-1 sshd-session[2785]: Connection closed by 192.168.122.100 port 38594
Oct 09 09:35:29 compute-1 sshd-session[3073]: Connection closed by 192.168.122.100 port 38698
Oct 09 09:35:29 compute-1 sshd-session[3044]: Connection closed by 192.168.122.100 port 38686
Oct 09 09:35:29 compute-1 sshd-session[2781]: Connection closed by 192.168.122.100 port 38588
Oct 09 09:35:29 compute-1 sshd-session[2930]: Connection closed by 192.168.122.100 port 38640
Oct 09 09:35:29 compute-1 sshd-session[2959]: Connection closed by 192.168.122.100 port 38650
Oct 09 09:35:29 compute-1 sshd-session[2872]: Connection closed by 192.168.122.100 port 38624
Oct 09 09:35:29 compute-1 sshd-session[2901]: Connection closed by 192.168.122.100 port 38628
Oct 09 09:35:29 compute-1 sshd-session[2843]: Connection closed by 192.168.122.100 port 38614
Oct 09 09:35:29 compute-1 sshd-session[2814]: Connection closed by 192.168.122.100 port 38608
Oct 09 09:35:29 compute-1 sshd-session[2988]: Connection closed by 192.168.122.100 port 38662
Oct 09 09:35:29 compute-1 sshd-session[2811]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 sshd-session[3070]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 sshd-session[2898]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-16.scope: Consumed 44.024s CPU time.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 sshd-session[3041]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 sshd-session[3014]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 sshd-session[2927]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 sshd-session[2840]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 sshd-session[2869]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 7.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 sshd-session[2782]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-6.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 sshd-session[2762]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-4.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 sshd-session[2956]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 16.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 10.
Oct 09 09:35:29 compute-1 sshd-session[2985]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 15.
Oct 09 09:35:29 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 14.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 11.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 8.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 9.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 6.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 4.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 12.
Oct 09 09:35:29 compute-1 systemd-logind[798]: Removed session 13.
Oct 09 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:29.812+0000 7f5bb9ecc140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.304+0000 7f5bb9ecc140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.368+0000 7f5bb9ecc140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.438+0000 7f5bb9ecc140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.570+0000 7f5bb9ecc140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct 09 09:35:30 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 09 09:35:30 compute-1 ceph-mon[9795]: mgrmap e11: compute-0.lwqgfy(active, since 93s)
Oct 09 09:35:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e24 _set_new_cache_sizes cache_size:1020053218 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.633+0000 7f5bb9ecc140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.768+0000 7f5bb9ecc140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.964+0000 7f5bb9ecc140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:31.196+0000 7f5bb9ecc140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:31.258+0000 7f5bb9ecc140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x560697008d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setuser ceph since I am not root
Oct 09 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setgroup ceph since I am not root
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:31.430+0000 7f6ea8e3c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:31.501+0000 7f6ea8e3c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct 09 09:35:31 compute-1 ceph-mon[9795]: Standby manager daemon compute-2.takdnm started
Oct 09 09:35:31 compute-1 ceph-mon[9795]: Standby manager daemon compute-1.etokpp started
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:32.175+0000 7f6ea8e3c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct 09 09:35:32 compute-1 ceph-mon[9795]: mgrmap e12: compute-0.lwqgfy(active, since 95s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:32.720+0000 7f6ea8e3c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:   from numpy import show_config as show_numpy_config
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:32.861+0000 7f6ea8e3c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct 09 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:32.923+0000 7f6ea8e3c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct 09 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:35:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:33.043+0000 7f6ea8e3c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct 09 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct 09 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct 09 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:33.947+0000 7f6ea8e3c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.135+0000 7f6ea8e3c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.201+0000 7f6ea8e3c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.258+0000 7f6ea8e3c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.327+0000 7f6ea8e3c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.388+0000 7f6ea8e3c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.688+0000 7f6ea8e3c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.772+0000 7f6ea8e3c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct 09 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct 09 09:35:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct 09 09:35:35 compute-1 ceph-mon[9795]: Active manager daemon compute-0.lwqgfy restarted
Oct 09 09:35:35 compute-1 ceph-mon[9795]: Activating manager daemon compute-0.lwqgfy
Oct 09 09:35:35 compute-1 ceph-mon[9795]: osdmap e25: 3 total, 2 up, 3 in
Oct 09 09:35:35 compute-1 ceph-mon[9795]: mgrmap e13: compute-0.lwqgfy(active, starting, since 0.015525s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 09 09:35:35 compute-1 ceph-mon[9795]: Manager daemon compute-0.lwqgfy is now available
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.158+0000 7f6ea8e3c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct 09 09:35:35 compute-1 sshd-session[10179]: Accepted publickey for ceph-admin from 192.168.122.100 port 55190 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:35:35 compute-1 systemd-logind[798]: New session 17 of user ceph-admin.
Oct 09 09:35:35 compute-1 systemd[1]: Started Session 17 of User ceph-admin.
Oct 09 09:35:35 compute-1 sshd-session[10179]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:35:35 compute-1 sudo[10183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:35 compute-1 sudo[10183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:35 compute-1 sudo[10183]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e25 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:35 compute-1 sudo[10208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:35:35 compute-1 sudo[10208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct 09 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.643+0000 7f6ea8e3c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.705+0000 7f6ea8e3c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.775+0000 7f6ea8e3c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct 09 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.902+0000 7f6ea8e3c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.963+0000 7f6ea8e3c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct 09 09:35:35 compute-1 podman[10289]: 2025-10-09 09:35:35.9963 +0000 UTC m=+0.038307060 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 09 09:35:36 compute-1 podman[10289]: 2025-10-09 09:35:36.078954245 +0000 UTC m=+0.120961284 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 09:35:36 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct 09 09:35:36 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct 09 09:35:36 compute-1 ceph-mon[9795]: mgrmap e14: compute-0.lwqgfy(active, since 1.02391s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:36 compute-1 ceph-mon[9795]: Standby manager daemon compute-2.takdnm restarted
Oct 09 09:35:36 compute-1 ceph-mon[9795]: Standby manager daemon compute-2.takdnm started
Oct 09 09:35:36 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:35:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:36.099+0000 7f6ea8e3c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-1 sudo[10208]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:36.289+0000 7f6ea8e3c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct 09 09:35:36 compute-1 sudo[10358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:36 compute-1 sudo[10358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:36 compute-1 sudo[10358]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:36 compute-1 sudo[10383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:35:36 compute-1 sudo[10383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct 09 09:35:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:36.519+0000 7f6ea8e3c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:36.580+0000 7f6ea8e3c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x564aa4d00d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: mgr load Constructed class from module: dashboard
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: [dashboard INFO root] server: ssl=no host=:: port=8443
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: [dashboard INFO root] Starting engine...
Oct 09 09:35:36 compute-1 ceph-mgr[10116]: [dashboard INFO root] Engine started...
Oct 09 09:35:36 compute-1 sudo[10383]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:36 compute-1 sudo[10449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:36 compute-1 sudo[10449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:36 compute-1 sudo[10449]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:36 compute-1 sudo[10474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 09:35:36 compute-1 sudo[10474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10474]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:37 compute-1 sudo[10515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10515]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:37 compute-1 sudo[10540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10540]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='client.14292 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:37 compute-1 ceph-mon[9795]: pgmap v3: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: Standby manager daemon compute-1.etokpp restarted
Oct 09 09:35:37 compute-1 ceph-mon[9795]: Standby manager daemon compute-1.etokpp started
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: mgrmap e15: compute-0.lwqgfy(active, since 2s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:35:37 compute-1 sudo[10565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:37 compute-1 sudo[10565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10565]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:37 compute-1 sudo[10590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10590]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:37 compute-1 sudo[10615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10615]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:37 compute-1 sudo[10663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10663]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:37 compute-1 sudo[10688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10688]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 09:35:37 compute-1 sudo[10713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10713]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:37 compute-1 sudo[10738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10738]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:37 compute-1 sudo[10763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10763]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:37 compute-1 sudo[10788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10788]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:37 compute-1 sudo[10813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10813]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:37 compute-1 sudo[10838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10838]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:37 compute-1 sudo[10886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10886]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:37 compute-1 sudo[10911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10911]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:37 compute-1 sudo[10936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10936]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-1 sudo[10961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:37 compute-1 sudo[10961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-1 sudo[10961]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[10986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:38 compute-1 sudo[10986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[10986]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-1 sudo[11011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11011]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:38 compute-1 sudo[11036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11036]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-1 sudo[11061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11061]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-1 sudo[11109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11109]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-1 sudo[11134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11134]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:38 compute-1 sudo[11159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11159]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:38 compute-1 sudo[11184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11184]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:38 compute-1 sudo[11209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11209]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-1 sudo[11234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11234]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:38 compute-1 sudo[11259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11259]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:36] ENGINE Bus STARTING
Oct 09 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:36] ENGINE Serving on http://192.168.122.100:8765
Oct 09 09:35:38 compute-1 ceph-mon[9795]: pgmap v4: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:37] ENGINE Serving on https://192.168.122.100:7150
Oct 09 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:37] ENGINE Client ('192.168.122.100', 44370) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 09 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:37] ENGINE Bus STARTED
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 09:35:38 compute-1 ceph-mon[9795]: from='client.14328 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:38 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:38 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:38 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:38 compute-1 sudo[11284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-1 sudo[11284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11284]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-1 sudo[11332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11332]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-1 sudo[11357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11357]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-1 sudo[11382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:38 compute-1 sudo[11382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-1 sudo[11382]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:39 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:39 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:39 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:39 compute-1 ceph-mon[9795]: from='client.14334 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:39 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:39 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:39 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  1: '-n'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  2: 'mgr.compute-1.etokpp'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  3: '-f'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  4: '--setuser'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  5: 'ceph'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  6: '--setgroup'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  7: 'ceph'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  8: '--default-log-to-file=false'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  9: '--default-log-to-journald=true'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 09 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  exe_path /proc/self/exe
Oct 09 09:35:40 compute-1 sshd-session[10182]: Connection closed by 192.168.122.100 port 55190
Oct 09 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setuser ceph since I am not root
Oct 09 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setgroup ceph since I am not root
Oct 09 09:35:40 compute-1 sshd-session[10179]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:40 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Oct 09 09:35:40 compute-1 systemd[1]: session-17.scope: Consumed 2.993s CPU time.
Oct 09 09:35:40 compute-1 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:35:40 compute-1 systemd-logind[798]: Removed session 17.
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct 09 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:40.163+0000 7f393bbf7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct 09 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:40.234+0000 7f393bbf7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct 09 09:35:40 compute-1 ceph-mon[9795]: from='client.14340 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:40 compute-1 ceph-mon[9795]: Deploying daemon node-exporter.compute-0 on compute-0
Oct 09 09:35:40 compute-1 ceph-mon[9795]: pgmap v5: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:40 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 09 09:35:40 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 09 09:35:40 compute-1 ceph-mon[9795]: mgrmap e16: compute-0.lwqgfy(active, since 4s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct 09 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:40.940+0000 7f393bbf7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:41.484+0000 7f393bbf7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:35:41 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 09 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:   from numpy import show_config as show_numpy_config
Oct 09 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:41.625+0000 7f393bbf7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct 09 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:41.687+0000 7f393bbf7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct 09 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:41.807+0000 7f393bbf7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct 09 09:35:42 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 09 09:35:42 compute-1 ceph-mon[9795]: mgrmap e17: compute-0.lwqgfy(active, since 6s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:42.652+0000 7f393bbf7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:35:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:42.839+0000 7f393bbf7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:35:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:42.910+0000 7f393bbf7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct 09 09:35:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:42.971+0000 7f393bbf7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.039+0000 7f393bbf7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct 09 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.102+0000 7f393bbf7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct 09 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.399+0000 7f393bbf7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.484+0000 7f393bbf7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct 09 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.865+0000 7f393bbf7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct 09 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.346+0000 7f393bbf7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct 09 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.407+0000 7f393bbf7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.478+0000 7f393bbf7140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct 09 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.607+0000 7f393bbf7140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct 09 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.670+0000 7f393bbf7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct 09 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.805+0000 7f393bbf7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.997+0000 7f393bbf7140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct 09 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:45.225+0000 7f393bbf7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct 09 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:45.285+0000 7f393bbf7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x5558da4fd860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setuser ceph since I am not root
Oct 09 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setgroup ceph since I am not root
Oct 09 09:35:45 compute-1 ceph-mon[9795]: Standby manager daemon compute-1.etokpp restarted
Oct 09 09:35:45 compute-1 ceph-mon[9795]: Standby manager daemon compute-1.etokpp started
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct 09 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:45.462+0000 7f4c12e9d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct 09 09:35:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct 09 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:45.536+0000 7f4c12e9d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct 09 09:35:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct 09 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:46.191+0000 7f4c12e9d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct 09 09:35:46 compute-1 ceph-mon[9795]: mgrmap e18: compute-0.lwqgfy(active, since 10s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:46 compute-1 ceph-mon[9795]: Standby manager daemon compute-2.takdnm restarted
Oct 09 09:35:46 compute-1 ceph-mon[9795]: Standby manager daemon compute-2.takdnm started
Oct 09 09:35:46 compute-1 ceph-mon[9795]: Active manager daemon compute-0.lwqgfy restarted
Oct 09 09:35:46 compute-1 ceph-mon[9795]: Activating manager daemon compute-0.lwqgfy
Oct 09 09:35:46 compute-1 ceph-mon[9795]: osdmap e26: 3 total, 2 up, 3 in
Oct 09 09:35:46 compute-1 ceph-mon[9795]: mgrmap e19: compute-0.lwqgfy(active, starting, since 0.0135589s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:46.725+0000 7f4c12e9d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:   from numpy import show_config as show_numpy_config
Oct 09 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:46.863+0000 7f4c12e9d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct 09 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:46.924+0000 7f4c12e9d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct 09 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct 09 09:35:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:47.042+0000 7f4c12e9d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct 09 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct 09 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct 09 09:35:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:47.884+0000 7f4c12e9d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.070+0000 7f4c12e9d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.135+0000 7f4c12e9d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct 09 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.193+0000 7f4c12e9d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.260+0000 7f4c12e9d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct 09 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.321+0000 7f4c12e9d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct 09 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.614+0000 7f4c12e9d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.697+0000 7f4c12e9d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct 09 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct 09 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.070+0000 7f4c12e9d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct 09 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.543+0000 7f4c12e9d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct 09 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.604+0000 7f4c12e9d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.673+0000 7f4c12e9d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct 09 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.801+0000 7f4c12e9d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct 09 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.862+0000 7f4c12e9d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct 09 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.995+0000 7f4c12e9d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:35:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:50.184+0000 7f4c12e9d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct 09 09:35:50 compute-1 systemd[1]: Stopping User Manager for UID 42477...
Oct 09 09:35:50 compute-1 systemd[2766]: Activating special unit Exit the Session...
Oct 09 09:35:50 compute-1 systemd[2766]: Stopped target Main User Target.
Oct 09 09:35:50 compute-1 systemd[2766]: Stopped target Basic System.
Oct 09 09:35:50 compute-1 systemd[2766]: Stopped target Paths.
Oct 09 09:35:50 compute-1 systemd[2766]: Stopped target Sockets.
Oct 09 09:35:50 compute-1 systemd[2766]: Stopped target Timers.
Oct 09 09:35:50 compute-1 systemd[2766]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:35:50 compute-1 systemd[2766]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 09:35:50 compute-1 systemd[2766]: Closed D-Bus User Message Bus Socket.
Oct 09 09:35:50 compute-1 systemd[2766]: Stopped Create User's Volatile Files and Directories.
Oct 09 09:35:50 compute-1 systemd[2766]: Removed slice User Application Slice.
Oct 09 09:35:50 compute-1 systemd[2766]: Reached target Shutdown.
Oct 09 09:35:50 compute-1 systemd[2766]: Finished Exit the Session.
Oct 09 09:35:50 compute-1 systemd[2766]: Reached target Exit the Session.
Oct 09 09:35:50 compute-1 systemd[1]: user@42477.service: Deactivated successfully.
Oct 09 09:35:50 compute-1 systemd[1]: Stopped User Manager for UID 42477.
Oct 09 09:35:50 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 09 09:35:50 compute-1 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 09 09:35:50 compute-1 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 09 09:35:50 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 09 09:35:50 compute-1 systemd[1]: Removed slice User Slice of UID 42477.
Oct 09 09:35:50 compute-1 systemd[1]: user-42477.slice: Consumed 47.740s CPU time.
Oct 09 09:35:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:50.415+0000 7f4c12e9d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct 09 09:35:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:50.475+0000 7f4c12e9d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: mgr load Constructed class from module: dashboard
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: [dashboard INFO root] server: ssl=no host=:: port=8443
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: [dashboard INFO root] Starting engine...
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x556a8c7cb860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 09 09:35:50 compute-1 ceph-mon[9795]: Standby manager daemon compute-1.etokpp restarted
Oct 09 09:35:50 compute-1 ceph-mon[9795]: Standby manager daemon compute-1.etokpp started
Oct 09 09:35:50 compute-1 ceph-mgr[10116]: [dashboard INFO root] Engine started...
Oct 09 09:35:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct 09 09:35:51 compute-1 sshd-session[11482]: Accepted publickey for ceph-admin from 192.168.122.100 port 39476 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:35:51 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Oct 09 09:35:51 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 09 09:35:51 compute-1 systemd-logind[798]: New session 18 of user ceph-admin.
Oct 09 09:35:51 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 09 09:35:51 compute-1 systemd[1]: Starting User Manager for UID 42477...
Oct 09 09:35:51 compute-1 systemd[11486]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:35:51 compute-1 systemd[11486]: Queued start job for default target Main User Target.
Oct 09 09:35:51 compute-1 systemd[11486]: Created slice User Application Slice.
Oct 09 09:35:51 compute-1 systemd[11486]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:35:51 compute-1 systemd[11486]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:35:51 compute-1 systemd[11486]: Reached target Paths.
Oct 09 09:35:51 compute-1 systemd[11486]: Reached target Timers.
Oct 09 09:35:51 compute-1 systemd[11486]: Starting D-Bus User Message Bus Socket...
Oct 09 09:35:51 compute-1 systemd[11486]: Starting Create User's Volatile Files and Directories...
Oct 09 09:35:51 compute-1 systemd[11486]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:35:51 compute-1 systemd[11486]: Finished Create User's Volatile Files and Directories.
Oct 09 09:35:51 compute-1 systemd[11486]: Reached target Sockets.
Oct 09 09:35:51 compute-1 systemd[11486]: Reached target Basic System.
Oct 09 09:35:51 compute-1 systemd[11486]: Reached target Main User Target.
Oct 09 09:35:51 compute-1 systemd[11486]: Startup finished in 87ms.
Oct 09 09:35:51 compute-1 systemd[1]: Started User Manager for UID 42477.
Oct 09 09:35:51 compute-1 systemd[1]: Started Session 18 of User ceph-admin.
Oct 09 09:35:51 compute-1 sshd-session[11482]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:35:51 compute-1 sudo[11502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:51 compute-1 sudo[11502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:51 compute-1 sudo[11502]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:51 compute-1 sudo[11527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:35:51 compute-1 sudo[11527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:51 compute-1 ceph-mon[9795]: mgrmap e20: compute-0.lwqgfy(active, starting, since 5s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:51 compute-1 ceph-mon[9795]: Active manager daemon compute-0.lwqgfy restarted
Oct 09 09:35:51 compute-1 ceph-mon[9795]: Activating manager daemon compute-0.lwqgfy
Oct 09 09:35:51 compute-1 ceph-mon[9795]: osdmap e27: 3 total, 2 up, 3 in
Oct 09 09:35:51 compute-1 ceph-mon[9795]: mgrmap e21: compute-0.lwqgfy(active, starting, since 0.0130272s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: Manager daemon compute-0.lwqgfy is now available
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: Standby manager daemon compute-2.takdnm restarted
Oct 09 09:35:51 compute-1 ceph-mon[9795]: Standby manager daemon compute-2.takdnm started
Oct 09 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct 09 09:35:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e2 new map
Oct 09 09:35:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e2 print_map
                                          e2
                                          btime 2025-10-09T09:35:51:790448+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        2
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:35:51.790428+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        
                                          up        {}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 0 members: 
                                           
                                           
Oct 09 09:35:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct 09 09:35:51 compute-1 podman[11606]: 2025-10-09 09:35:51.810457209 +0000 UTC m=+0.039848377 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 09 09:35:51 compute-1 podman[11606]: 2025-10-09 09:35:51.889970301 +0000 UTC m=+0.119361459 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 09 09:35:52 compute-1 sudo[11527]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:52 compute-1 sudo[11676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:52 compute-1 sudo[11676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:52 compute-1 sudo[11676]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:52 compute-1 sudo[11701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:35:52 compute-1 sudo[11701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:52 compute-1 sudo[11701]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:52 compute-1 sudo[11755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:52 compute-1 sudo[11755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:52 compute-1 sudo[11755]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:52 compute-1 sudo[11780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 09:35:52 compute-1 sudo[11780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:52 compute-1 ceph-mon[9795]: mgrmap e22: compute-0.lwqgfy(active, since 1.02912s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 09 09:35:52 compute-1 ceph-mon[9795]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 09 09:35:52 compute-1 ceph-mon[9795]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 09 09:35:52 compute-1 ceph-mon[9795]: osdmap e28: 3 total, 2 up, 3 in
Oct 09 09:35:52 compute-1 ceph-mon[9795]: fsmap cephfs:0
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:52 compute-1 ceph-mon[9795]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Bus STARTING
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='client.24205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:52 compute-1 ceph-mon[9795]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-1 sudo[11780]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[11821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:53 compute-1 sudo[11821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[11821]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[11846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:53 compute-1 sudo[11846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[11846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[11871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:53 compute-1 sudo[11871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[11871]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[11896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:53 compute-1 sudo[11896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[11896]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[11921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:53 compute-1 sudo[11921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[11921]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[11969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:53 compute-1 sudo[11969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[11969]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[11994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:53 compute-1 sudo[11994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[11994]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 09:35:53 compute-1 sudo[12019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12019]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:53 compute-1 sudo[12044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12044]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:53 compute-1 sudo[12069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12069]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:53 compute-1 sudo[12094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12094]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:53 compute-1 sudo[12119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12119]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:53 compute-1 sudo[12144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12144]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:53 compute-1 sudo[12192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12192]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:53 compute-1 sudo[12217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12217]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:53 compute-1 sudo[12242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12242]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:53 compute-1 sudo[12267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12267]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:53 compute-1 sudo[12292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12292]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:53 compute-1 sudo[12317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12317]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 sudo[12342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:53 compute-1 sudo[12342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12342]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Serving on http://192.168.122.100:8765
Oct 09 09:35:53 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Serving on https://192.168.122.100:7150
Oct 09 09:35:53 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Bus STARTED
Oct 09 09:35:53 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Client ('192.168.122.100', 36178) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 09 09:35:53 compute-1 ceph-mon[9795]: pgmap v5: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:53 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct 09 09:35:53 compute-1 ceph-mon[9795]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:35:53 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 09:35:53 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 09:35:53 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='client.14418 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 09 09:35:53 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:53 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:53 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:53 compute-1 ceph-mon[9795]: mgrmap e23: compute-0.lwqgfy(active, since 2s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:53 compute-1 sudo[12367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:53 compute-1 sudo[12367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-1 sudo[12367]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct 09 09:35:54 compute-1 sudo[12415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-1 sudo[12415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12415]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-1 sudo[12440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12440]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:54 compute-1 sudo[12465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12465]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:54 compute-1 sudo[12490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12490]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:54 compute-1 sudo[12515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12515]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-1 sudo[12540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12540]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:54 compute-1 sudo[12565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12565]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-1 sudo[12590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12590]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-1 sudo[12638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12638]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-1 sudo[12663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12663]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:54 compute-1 sudo[12688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12688]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:54 compute-1 sudo[12713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-1 sudo[12713]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-1 sudo[12738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:54 compute-1 sudo[12738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:55 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:55 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:55 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 09 09:35:55 compute-1 ceph-mon[9795]: osdmap e29: 3 total, 2 up, 3 in
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 09 09:35:55 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:55 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:55 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct 09 09:35:55 compute-1 systemd[1]: Reloading.
Oct 09 09:35:55 compute-1 systemd-sysv-generator[12821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:55 compute-1 systemd-rc-local-generator[12816]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:55 compute-1 systemd[1]: Reloading.
Oct 09 09:35:55 compute-1 systemd-sysv-generator[12861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:55 compute-1 systemd-rc-local-generator[12858]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:55 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:35:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:55 compute-1 bash[12912]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct 09 09:35:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct 09 09:35:56 compute-1 ceph-mon[9795]: Deploying daemon node-exporter.compute-1 on compute-1
Oct 09 09:35:56 compute-1 ceph-mon[9795]: pgmap v7: 39 pgs: 1 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:56 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 09 09:35:56 compute-1 ceph-mon[9795]: osdmap e30: 3 total, 2 up, 3 in
Oct 09 09:35:56 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:56 compute-1 ceph-mon[9795]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:56 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:56 compute-1 ceph-mon[9795]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:56 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:56 compute-1 ceph-mon[9795]: mgrmap e24: compute-0.lwqgfy(active, since 4s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:56 compute-1 ceph-mon[9795]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 09:35:56 compute-1 bash[12912]: Getting image source signatures
Oct 09 09:35:56 compute-1 bash[12912]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct 09 09:35:56 compute-1 bash[12912]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct 09 09:35:56 compute-1 bash[12912]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct 09 09:35:56 compute-1 bash[12912]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct 09 09:35:56 compute-1 bash[12912]: Writing manifest to image destination
Oct 09 09:35:56 compute-1 podman[12912]: 2025-10-09 09:35:56.842351942 +0000 UTC m=+1.213163415 container create 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:35:56 compute-1 podman[12912]: 2025-10-09 09:35:56.833360521 +0000 UTC m=+1.204172015 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 09 09:35:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac037a0d617511958afad4153ee7390f0013c07eee65864bb14f6c9129d06cfc/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:56 compute-1 podman[12912]: 2025-10-09 09:35:56.871614298 +0000 UTC m=+1.242425793 container init 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:35:56 compute-1 podman[12912]: 2025-10-09 09:35:56.875809206 +0000 UTC m=+1.246620680 container start 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:35:56 compute-1 bash[12912]: 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.879Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.879Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.880Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.880Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.880Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.880Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 09 09:35:56 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=arp
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=bcache
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=bonding
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=cpu
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=dmi
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=edac
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=entropy
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=filefd
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=netclass
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=netdev
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=netstat
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=nfs
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=nvme
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=os
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=pressure
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=rapl
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=selinux
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=softnet
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=stat
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=textfile
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=time
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=uname
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=node_exporter.go:117 level=info collector=xfs
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=node_exporter.go:117 level=info collector=zfs
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 09 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 09 09:35:56 compute-1 sudo[12738]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:57 compute-1 ceph-mon[9795]: osdmap e31: 3 total, 2 up, 3 in
Oct 09 09:35:57 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 09 09:35:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 09 09:35:57 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:57 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:57 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:58 compute-1 ceph-mon[9795]: pgmap v10: 39 pgs: 1 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:58 compute-1 ceph-mon[9795]: Deploying daemon node-exporter.compute-2 on compute-2
Oct 09 09:35:58 compute-1 ceph-mon[9795]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 09 09:35:58 compute-1 ceph-mon[9795]: mgrmap e25: compute-0.lwqgfy(active, since 6s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1035192713' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 09:35:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1636592391' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:36:00 compute-1 ceph-mon[9795]: pgmap v11: 39 pgs: 39 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1429686175' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:01 compute-1 ceph-mon[9795]: pgmap v12: 39 pgs: 39 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Oct 09 09:36:01 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:01 compute-1 ceph-mon[9795]: from='client.24245 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 09:36:01 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 09 09:36:01 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:01 compute-1 ceph-mon[9795]: Deploying daemon osd.2 on compute-2
Oct 09 09:36:03 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:03 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:04 compute-1 ceph-mon[9795]: from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 09:36:04 compute-1 ceph-mon[9795]: pgmap v13: 39 pgs: 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct 09 09:36:04 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:04 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:05 compute-1 ceph-mon[9795]: from='client.24257 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 09:36:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:06 compute-1 ceph-mon[9795]: pgmap v14: 39 pgs: 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 09 09:36:06 compute-1 ceph-mon[9795]: from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:06 compute-1 ceph-mon[9795]: Deploying daemon rgw.rgw.compute-2.mbbcec on compute-2
Oct 09 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:06 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2036627890' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 09:36:06 compute-1 sudo[12985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:06 compute-1 sudo[12985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:06 compute-1 sudo[12985]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:06 compute-1 sudo[13010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:06 compute-1 sudo[13010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.862100951 +0000 UTC m=+0.025105341 container create 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:36:06 compute-1 systemd[1]: Started libpod-conmon-875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4.scope.
Oct 09 09:36:06 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.911725806 +0000 UTC m=+0.074730217 container init 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.915940502 +0000 UTC m=+0.078944893 container start 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.916899651 +0000 UTC m=+0.079904041 container attach 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 09 09:36:06 compute-1 intelligent_lovelace[13082]: 167 167
Oct 09 09:36:06 compute-1 systemd[1]: libpod-875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4.scope: Deactivated successfully.
Oct 09 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.919033053 +0000 UTC m=+0.082037463 container died 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 09 09:36:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-ffadcff2f4b7c8d7a3ef7ca68e7b70f0e98ca5e82e1b095915efcde60d7d0358-merged.mount: Deactivated successfully.
Oct 09 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.944275622 +0000 UTC m=+0.107280011 container remove 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.85158725 +0000 UTC m=+0.014591660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:06 compute-1 systemd[1]: libpod-conmon-875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4.scope: Deactivated successfully.
Oct 09 09:36:06 compute-1 systemd[1]: Reloading.
Oct 09 09:36:07 compute-1 systemd-rc-local-generator[13118]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:07 compute-1 systemd-sysv-generator[13121]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:07 compute-1 systemd[1]: Reloading.
Oct 09 09:36:07 compute-1 systemd-rc-local-generator[13158]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:07 compute-1 systemd-sysv-generator[13161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:07 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.fxnvnn for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:36:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct 09 09:36:07 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719329378' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 09 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:07 compute-1 ceph-mon[9795]: Deploying daemon rgw.rgw.compute-1.fxnvnn on compute-1
Oct 09 09:36:07 compute-1 ceph-mon[9795]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 09 09:36:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2719329378' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 09 09:36:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct 09 09:36:07 compute-1 podman[13215]: 2025-10-09 09:36:07.530753894 +0000 UTC m=+0.028193684 container create 43272f5fbc7b06cfa3a5e91acf2ff34586a7679d5fd0d0b2fed02ec9e020a8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-1-fxnvnn, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 09 09:36:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b892584c111a9eec031e8afd71142ac431487b8bc4c0aa7195955bee310af347/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b892584c111a9eec031e8afd71142ac431487b8bc4c0aa7195955bee310af347/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b892584c111a9eec031e8afd71142ac431487b8bc4c0aa7195955bee310af347/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b892584c111a9eec031e8afd71142ac431487b8bc4c0aa7195955bee310af347/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.fxnvnn supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:07 compute-1 podman[13215]: 2025-10-09 09:36:07.575288172 +0000 UTC m=+0.072727952 container init 43272f5fbc7b06cfa3a5e91acf2ff34586a7679d5fd0d0b2fed02ec9e020a8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-1-fxnvnn, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 09 09:36:07 compute-1 podman[13215]: 2025-10-09 09:36:07.57935029 +0000 UTC m=+0.076790071 container start 43272f5fbc7b06cfa3a5e91acf2ff34586a7679d5fd0d0b2fed02ec9e020a8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-1-fxnvnn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 09 09:36:07 compute-1 bash[13215]: 43272f5fbc7b06cfa3a5e91acf2ff34586a7679d5fd0d0b2fed02ec9e020a8bd
Oct 09 09:36:07 compute-1 podman[13215]: 2025-10-09 09:36:07.51791501 +0000 UTC m=+0.015354810 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:07 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.fxnvnn for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:36:07 compute-1 sudo[13010]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:07 compute-1 radosgw[13231]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:36:07 compute-1 radosgw[13231]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct 09 09:36:07 compute-1 radosgw[13231]: framework: beast
Oct 09 09:36:07 compute-1 radosgw[13231]: framework conf key: endpoint, val: 192.168.122.101:8082
Oct 09 09:36:07 compute-1 radosgw[13231]: init_numa not setting numa affinity
Oct 09 09:36:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e33 e33: 3 total, 2 up, 3 in
Oct 09 09:36:08 compute-1 ceph-mon[9795]: pgmap v15: 39 pgs: 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 09 09:36:08 compute-1 ceph-mon[9795]: osdmap e32: 3 total, 2 up, 3 in
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/573248088' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.918492317s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.938423157s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.918516159s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.938461304s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.918492317s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938423157s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.918516159s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938461304s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922034264s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942253113s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922034264s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942253113s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922046661s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942306519s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922046661s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922318459s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942604065s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922318459s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942604065s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.921983719s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942306519s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.921983719s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.921921730s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942375183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.921921730s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942375183s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:09 compute-1 ceph-mon[9795]: Deploying daemon rgw.rgw.compute-0.yciajn on compute-0
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 09 09:36:09 compute-1 ceph-mon[9795]: osdmap e33: 3 total, 2 up, 3 in
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3729780142' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 09 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct 09 09:36:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 09 09:36:09 compute-1 ceph-mon[9795]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[10.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.053833961s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938461304s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.053790092s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938423157s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.053812981s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938461304s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.053765297s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938423157s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057502747s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942253113s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057479858s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942253113s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057463646s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057728767s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942604065s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057424545s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057720184s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942604065s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057360649s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057330132s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057376862s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942375183s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057367325s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942375183s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:36:10 compute-1 ceph-mon[9795]: purged_snaps scrub starts
Oct 09 09:36:10 compute-1 ceph-mon[9795]: purged_snaps scrub ok
Oct 09 09:36:10 compute-1 ceph-mon[9795]: pgmap v18: 40 pgs: 1 unknown, 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:36:10 compute-1 ceph-mon[9795]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 09 09:36:10 compute-1 ceph-mon[9795]: Deploying daemon mds.cephfs.compute-2.zfggbi on compute-2
Oct 09 09:36:10 compute-1 ceph-mon[9795]: OSD bench result of 22080.768566 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867] boot
Oct 09 09:36:10 compute-1 ceph-mon[9795]: osdmap e34: 3 total, 3 up, 3 in
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2574318436' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 09 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct 09 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 35 pg[10.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e3 new map
Oct 09 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e3 print_map
                                          e3
                                          btime 2025-10-09T09:36:10:513915+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        2
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:35:51.790428+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        
                                          up        {}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 0 members: 
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-2.zfggbi{-1:14535} state up:standby seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e4 new map
Oct 09 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e4 print_map
                                          e4
                                          btime 2025-10-09T09:36:10:526987+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        4
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:10.526981+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 0 members: 
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:creating seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
Oct 09 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:11 compute-1 sudo[13818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:11 compute-1 sudo[13818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:11 compute-1 sudo[13818]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:11 compute-1 sudo[13843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:11 compute-1 sudo[13843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:11 compute-1 irqbalance[794]: Cannot change IRQ 44 affinity: Operation not permitted
Oct 09 09:36:11 compute-1 irqbalance[794]: IRQ 44 affinity is now unmanaged
Oct 09 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct 09 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 09 09:36:11 compute-1 ceph-mon[9795]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:11 compute-1 ceph-mon[9795]: Deploying daemon mds.cephfs.compute-0.wjwyle on compute-0
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 09 09:36:11 compute-1 ceph-mon[9795]: osdmap e35: 3 total, 3 up, 3 in
Oct 09 09:36:11 compute-1 ceph-mon[9795]: mds.? [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] up:boot
Oct 09 09:36:11 compute-1 ceph-mon[9795]: daemon mds.cephfs.compute-2.zfggbi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 09 09:36:11 compute-1 ceph-mon[9795]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 09 09:36:11 compute-1 ceph-mon[9795]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 09 09:36:11 compute-1 ceph-mon[9795]: Cluster is now healthy
Oct 09 09:36:11 compute-1 ceph-mon[9795]: fsmap cephfs:0 1 up:standby
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zfggbi"}]: dispatch
Oct 09 09:36:11 compute-1 ceph-mon[9795]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:creating}
Oct 09 09:36:11 compute-1 ceph-mon[9795]: daemon mds.cephfs.compute-2.zfggbi is now active in filesystem cephfs as rank 0
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 09 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e5 new map
Oct 09 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e5 print_map
                                          e5
                                          btime 2025-10-09T09:36:11:555720+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        5
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:11.555718+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e6 new map
Oct 09 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e6 print_map
                                          e6
                                          btime 2025-10-09T09:36:11:561187+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        5
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:11.555718+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.567379229 +0000 UTC m=+0.028399231 container create d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 09 09:36:11 compute-1 systemd[1]: Started libpod-conmon-d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3.scope.
Oct 09 09:36:11 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.620917863 +0000 UTC m=+0.081937875 container init d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.625268544 +0000 UTC m=+0.086288547 container start d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 09 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.626369661 +0000 UTC m=+0.087389663 container attach d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 09 09:36:11 compute-1 wonderful_lehmann[13914]: 167 167
Oct 09 09:36:11 compute-1 systemd[1]: libpod-d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3.scope: Deactivated successfully.
Oct 09 09:36:11 compute-1 conmon[13914]: conmon d229c1b31e84b143e13c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3.scope/container/memory.events
Oct 09 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.555854391 +0000 UTC m=+0.016874393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:11 compute-1 podman[13919]: 2025-10-09 09:36:11.662344493 +0000 UTC m=+0.018315401 container died d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-11a49a17acf61386b0f59e23076f64b8c93169d2550219a6a7ce77af364e652e-merged.mount: Deactivated successfully.
Oct 09 09:36:11 compute-1 podman[13919]: 2025-10-09 09:36:11.679400517 +0000 UTC m=+0.035371416 container remove d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:36:11 compute-1 systemd[1]: libpod-conmon-d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3.scope: Deactivated successfully.
Oct 09 09:36:11 compute-1 systemd[1]: Reloading.
Oct 09 09:36:11 compute-1 systemd-rc-local-generator[13951]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:11 compute-1 systemd-sysv-generator[13955]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:11 compute-1 systemd[1]: Reloading.
Oct 09 09:36:11 compute-1 systemd-sysv-generator[13995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:11 compute-1 systemd-rc-local-generator[13992]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:12 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.svghvn for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:36:12 compute-1 podman[14047]: 2025-10-09 09:36:12.25074578 +0000 UTC m=+0.025355843 container create fb756edb7283d84213bd667f395c4b27ab3945bcd18c5610b45b94e654cf545d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-1-svghvn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:36:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef9511872f12340b6a4a1dbff3bb394d4998224e35c537fe7724b2b233ebd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef9511872f12340b6a4a1dbff3bb394d4998224e35c537fe7724b2b233ebd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef9511872f12340b6a4a1dbff3bb394d4998224e35c537fe7724b2b233ebd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef9511872f12340b6a4a1dbff3bb394d4998224e35c537fe7724b2b233ebd0/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.svghvn supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:12 compute-1 podman[14047]: 2025-10-09 09:36:12.285538643 +0000 UTC m=+0.060148716 container init fb756edb7283d84213bd667f395c4b27ab3945bcd18c5610b45b94e654cf545d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-1-svghvn, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 09 09:36:12 compute-1 podman[14047]: 2025-10-09 09:36:12.291623585 +0000 UTC m=+0.066233638 container start fb756edb7283d84213bd667f395c4b27ab3945bcd18c5610b45b94e654cf545d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-1-svghvn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:12 compute-1 bash[14047]: fb756edb7283d84213bd667f395c4b27ab3945bcd18c5610b45b94e654cf545d
Oct 09 09:36:12 compute-1 podman[14047]: 2025-10-09 09:36:12.239958554 +0000 UTC m=+0.014568627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:12 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.svghvn for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:36:12 compute-1 ceph-mds[14063]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:36:12 compute-1 ceph-mds[14063]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct 09 09:36:12 compute-1 ceph-mds[14063]: main not setting numa affinity
Oct 09 09:36:12 compute-1 ceph-mds[14063]: pidfile_write: ignore empty --pid-file
Oct 09 09:36:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-1-svghvn[14059]: starting mds.cephfs.compute-1.svghvn at 
Oct 09 09:36:12 compute-1 sudo[13843]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:12 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Updating MDS map to version 6 from mon.2
Oct 09 09:36:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct 09 09:36:12 compute-1 ceph-mon[9795]: pgmap v21: 41 pgs: 7 peering, 2 unknown, 32 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:12 compute-1 ceph-mon[9795]: Deploying daemon mds.cephfs.compute-1.svghvn on compute-1
Oct 09 09:36:12 compute-1 ceph-mon[9795]: osdmap e36: 3 total, 3 up, 3 in
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-1 ceph-mon[9795]: mds.? [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] up:active
Oct 09 09:36:12 compute-1 ceph-mon[9795]: mds.? [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] up:boot
Oct 09 09:36:12 compute-1 ceph-mon[9795]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 1 up:standby
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjwyle"}]: dispatch
Oct 09 09:36:12 compute-1 ceph-mon[9795]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 1 up:standby
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 09 09:36:12 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 09 09:36:12 compute-1 ceph-mon[9795]: osdmap e37: 3 total, 3 up, 3 in
Oct 09 09:36:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e7 new map
Oct 09 09:36:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e7 print_map
                                          e7
                                          btime 2025-10-09T09:36:12:564873+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        5
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:11.555718+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
                                          [mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:12 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Updating MDS map to version 7 from mon.2
Oct 09 09:36:12 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Monitors have assigned me to become a standby
Oct 09 09:36:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct 09 09:36:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 09 09:36:13 compute-1 ceph-mon[9795]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:13 compute-1 ceph-mon[9795]: Deploying daemon alertmanager.compute-0 on compute-0
Oct 09 09:36:13 compute-1 ceph-mon[9795]: mds.? [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] up:boot
Oct 09 09:36:13 compute-1 ceph-mon[9795]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby
Oct 09 09:36:13 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.svghvn"}]: dispatch
Oct 09 09:36:13 compute-1 ceph-mon[9795]: osdmap e38: 3 total, 3 up, 3 in
Oct 09 09:36:13 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 38 pg[12.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct 09 09:36:14 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 39 pg[12.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 09 09:36:14 compute-1 ceph-mon[9795]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:14 compute-1 ceph-mon[9795]: pgmap v24: 42 pgs: 1 creating+peering, 7 peering, 34 active+clean; 452 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 5.2 KiB/s wr, 21 op/s
Oct 09 09:36:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 09 09:36:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 09 09:36:14 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 09 09:36:14 compute-1 ceph-mon[9795]: osdmap e39: 3 total, 3 up, 3 in
Oct 09 09:36:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 09 09:36:15 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 09 09:36:15 compute-1 ceph-mon[9795]: osdmap e40: 3 total, 3 up, 3 in
Oct 09 09:36:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e8 new map
Oct 09 09:36:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e8 print_map
                                          e8
                                          btime 2025-10-09T09:36:15:540254+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        8
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:14.585925+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
                                          [mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:15 compute-1 radosgw[13231]: v1 topic migration: starting v1 topic migration..
Oct 09 09:36:15 compute-1 radosgw[13231]: LDAP not started since no server URIs were provided in the configuration.
Oct 09 09:36:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-1-fxnvnn[13227]: 2025-10-09T09:36:15.600+0000 7ff35366c980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-1 radosgw[13231]: v1 topic migration: finished v1 topic migration
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-1 radosgw[13231]: framework: beast
Oct 09 09:36:15 compute-1 radosgw[13231]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 09 09:36:15 compute-1 radosgw[13231]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-1 radosgw[13231]: starting handler: beast
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-1 radosgw[13231]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:36:15 compute-1 radosgw[13231]: mgrc service_daemon_register rgw.24296 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.fxnvnn,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=773beadf-adcd-43ff-a482-a2d7a5b40bd8,zone_name=default,zonegroup_id=74fea7f9-d931-4447-a756-db2299521313,zonegroup_name=default}
Oct 09 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 09 09:36:16 compute-1 ceph-mon[9795]: pgmap v27: 43 pgs: 1 unknown, 1 creating+peering, 7 peering, 34 active+clean; 452 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 5.2 KiB/s wr, 21 op/s
Oct 09 09:36:16 compute-1 ceph-mon[9795]: Regenerating cephadm self-signed grafana TLS certificates
Oct 09 09:36:16 compute-1 ceph-mon[9795]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 09 09:36:16 compute-1 ceph-mon[9795]: Deploying daemon grafana.compute-0 on compute-0
Oct 09 09:36:16 compute-1 ceph-mon[9795]: mds.? [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] up:active
Oct 09 09:36:16 compute-1 ceph-mon[9795]: mds.? [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] up:standby
Oct 09 09:36:16 compute-1 ceph-mon[9795]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby
Oct 09 09:36:16 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:16 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e9 new map
Oct 09 09:36:16 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e9 print_map
                                          e9
                                          btime 2025-10-09T09:36:16:832969+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        8
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:14.585925+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
                                          [mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:16 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Updating MDS map to version 9 from mon.2
Oct 09 09:36:17 compute-1 ceph-mon[9795]: pgmap v29: 43 pgs: 1 unknown, 1 creating+peering, 41 active+clean; 452 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:17 compute-1 ceph-mon[9795]: mds.? [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] up:standby
Oct 09 09:36:17 compute-1 ceph-mon[9795]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby
Oct 09 09:36:19 compute-1 ceph-mon[9795]: pgmap v30: 43 pgs: 43 active+clean; 456 KiB data, 485 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 5.7 KiB/s wr, 422 op/s
Oct 09 09:36:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:22 compute-1 ceph-mon[9795]: pgmap v31: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 4.7 KiB/s wr, 350 op/s
Oct 09 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-1 ceph-mon[9795]: Deploying daemon haproxy.rgw.default.compute-0.kmcywb on compute-0
Oct 09 09:36:24 compute-1 ceph-mon[9795]: pgmap v32: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 4.1 KiB/s wr, 307 op/s
Oct 09 09:36:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:26 compute-1 ceph-mon[9795]: pgmap v33: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 3.4 KiB/s wr, 253 op/s
Oct 09 09:36:26 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000020s ======
Oct 09 09:36:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:27.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Oct 09 09:36:27 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:27 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:27 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:27 compute-1 ceph-mon[9795]: Deploying daemon haproxy.rgw.default.compute-2.gkeojf on compute-2
Oct 09 09:36:27 compute-1 ceph-mon[9795]: pgmap v34: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 3.0 KiB/s wr, 225 op/s
Oct 09 09:36:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:29.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:29 compute-1 ceph-mon[9795]: pgmap v35: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 2.8 KiB/s wr, 211 op/s
Oct 09 09:36:29 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:29 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:29 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:29 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:30.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:30 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:36:30 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:36:30 compute-1 ceph-mon[9795]: Deploying daemon keepalived.rgw.default.compute-2.tcjodw on compute-2
Oct 09 09:36:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:31.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:31 compute-1 ceph-mon[9795]: pgmap v36: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:36:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:33.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:36:33 compute-1 ceph-mon[9795]: pgmap v37: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:34.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:35 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:35 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:35 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:35 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:36:35 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:36:35 compute-1 ceph-mon[9795]: Deploying daemon keepalived.rgw.default.compute-0.uozjha on compute-0
Oct 09 09:36:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:35.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:36 compute-1 ceph-mon[9795]: pgmap v38: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:36:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:36:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:37.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:38 compute-1 ceph-mon[9795]: pgmap v39: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:38 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:38 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:38 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:38 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:39 compute-1 ceph-mon[9795]: Deploying daemon prometheus.compute-0 on compute-0
Oct 09 09:36:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:39.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:40 compute-1 ceph-mon[9795]: pgmap v40: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:36:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:36:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:41.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:41 compute-1 ceph-mon[9795]: pgmap v41: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:41 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:42.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:43.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:43 compute-1 ceph-mon[9795]: pgmap v42: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:43 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:43 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:43 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:43 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 09 09:36:43 compute-1 ceph-mgr[10116]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 09:36:44 compute-1 sshd-session[11501]: Connection closed by 192.168.122.100 port 39476
Oct 09 09:36:44 compute-1 sshd-session[11482]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:36:44 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Oct 09 09:36:44 compute-1 systemd[1]: session-18.scope: Consumed 4.463s CPU time.
Oct 09 09:36:44 compute-1 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Oct 09 09:36:44 compute-1 systemd-logind[798]: Removed session 18.
Oct 09 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setuser ceph since I am not root
Oct 09 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setgroup ceph since I am not root
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct 09 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:44.167+0000 7f9352cb8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct 09 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:44.238+0000 7f9352cb8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct 09 09:36:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:36:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:44.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct 09 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:44.908+0000 7f9352cb8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct 09 09:36:44 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 09 09:36:44 compute-1 ceph-mon[9795]: mgrmap e26: compute-0.lwqgfy(active, since 53s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:36:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:45.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:45.455+0000 7f9352cb8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:   from numpy import show_config as show_numpy_config
Oct 09 09:36:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:45.602+0000 7f9352cb8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct 09 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:45.666+0000 7f9352cb8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct 09 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:45.794+0000 7f9352cb8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct 09 09:36:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:36:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:46.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct 09 09:36:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:46.650+0000 7f9352cb8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:36:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:46.838+0000 7f9352cb8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:36:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:46.904+0000 7f9352cb8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct 09 09:36:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:46.962+0000 7f9352cb8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.031+0000 7f9352cb8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct 09 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.093+0000 7f9352cb8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct 09 09:36:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:47.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.391+0000 7f9352cb8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.476+0000 7f9352cb8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct 09 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.852+0000 7f9352cb8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct 09 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.337+0000 7f9352cb8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct 09 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.400+0000 7f9352cb8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:36:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:48.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.470+0000 7f9352cb8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct 09 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.600+0000 7f9352cb8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct 09 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.662+0000 7f9352cb8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct 09 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.795+0000 7f9352cb8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.985+0000 7f9352cb8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct 09 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:49.214+0000 7f9352cb8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct 09 09:36:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:49.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:49.276+0000 7f9352cb8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: mgr load Constructed class from module: dashboard
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: mgr load Constructed class from module: prometheus
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [dashboard INFO root] server: ssl=no host=:: port=8443
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [dashboard INFO root] Starting engine...
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x56303d943860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO root] Starting engine...
Oct 09 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct 09 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: CherryPy Checker:
Oct 09 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: The Application mounted at '' has an empty config.
Oct 09 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 
Oct 09 09:36:49 compute-1 ceph-mon[9795]: Standby manager daemon compute-1.etokpp restarted
Oct 09 09:36:49 compute-1 ceph-mon[9795]: Standby manager daemon compute-1.etokpp started
Oct 09 09:36:49 compute-1 ceph-mon[9795]: Standby manager daemon compute-2.takdnm restarted
Oct 09 09:36:49 compute-1 ceph-mon[9795]: Standby manager daemon compute-2.takdnm started
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [dashboard INFO root] Engine started...
Oct 09 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct 09 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct 09 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO root] Engine started.
Oct 09 09:36:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct 09 09:36:49 compute-1 sshd-session[14173]: Accepted publickey for ceph-admin from 192.168.122.100 port 51472 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:36:49 compute-1 systemd-logind[798]: New session 20 of user ceph-admin.
Oct 09 09:36:49 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Oct 09 09:36:49 compute-1 sshd-session[14173]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:36:50 compute-1 sudo[14177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:50 compute-1 sudo[14177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:50 compute-1 sudo[14177]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:50 compute-1 sudo[14202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:36:50 compute-1 sudo[14202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:50 compute-1 ceph-mon[9795]: mgrmap e27: compute-0.lwqgfy(active, since 58s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:50 compute-1 ceph-mon[9795]: Active manager daemon compute-0.lwqgfy restarted
Oct 09 09:36:50 compute-1 ceph-mon[9795]: Activating manager daemon compute-0.lwqgfy
Oct 09 09:36:50 compute-1 ceph-mon[9795]: osdmap e41: 3 total, 3 up, 3 in
Oct 09 09:36:50 compute-1 ceph-mon[9795]: mgrmap e28: compute-0.lwqgfy(active, starting, since 0.0142687s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zfggbi"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjwyle"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.svghvn"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: Manager daemon compute-0.lwqgfy is now available
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct 09 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct 09 09:36:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:50.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:50 compute-1 podman[14283]: 2025-10-09 09:36:50.450022541 +0000 UTC m=+0.036451010 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 09 09:36:50 compute-1 podman[14283]: 2025-10-09 09:36:50.532474061 +0000 UTC m=+0.118902530 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:36:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:50 compute-1 podman[14379]: 2025-10-09 09:36:50.811015024 +0000 UTC m=+0.033777700 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:36:50 compute-1 podman[14379]: 2025-10-09 09:36:50.817881379 +0000 UTC m=+0.040644033 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:36:50 compute-1 sudo[14202]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:51 compute-1 sudo[14440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:51 compute-1 sudo[14440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:51 compute-1 sudo[14440]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:51 compute-1 sudo[14465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:36:51 compute-1 sudo[14465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:51 compute-1 sudo[14465]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:51 compute-1 sudo[14519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:51 compute-1 sudo[14519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:51 compute-1 sudo[14519]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:51 compute-1 ceph-mon[9795]: mgrmap e29: compute-0.lwqgfy(active, since 1.02937s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:50] ENGINE Bus STARTING
Oct 09 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:50] ENGINE Serving on http://192.168.122.100:8765
Oct 09 09:36:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:51] ENGINE Serving on https://192.168.122.100:7150
Oct 09 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:51] ENGINE Bus STARTED
Oct 09 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:51] ENGINE Client ('192.168.122.100', 39912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 09 09:36:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:51 compute-1 sudo[14544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 09:36:51 compute-1 sudo[14544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:51 compute-1 sudo[14544]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:36:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:52.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:36:52 compute-1 ceph-mon[9795]: pgmap v4: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 09 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-1 ceph-mon[9795]: mgrmap e30: compute-0.lwqgfy(active, since 2s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:53 compute-1 sudo[14585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:36:53 compute-1 sudo[14585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14585]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:36:53 compute-1 sudo[14610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14610]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:53 compute-1 sudo[14635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:36:53 compute-1 sudo[14635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14635]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:53 compute-1 sudo[14660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14660]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:36:53 compute-1 sudo[14685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14685]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:36:53 compute-1 sudo[14733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14733]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:36:53 compute-1 sudo[14758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14758]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 09:36:53 compute-1 sudo[14783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14783]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:36:53 compute-1 sudo[14808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14808]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:36:53 compute-1 sudo[14833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14833]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:36:53 compute-1 sudo[14858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14858]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:53 compute-1 sudo[14883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14883]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:53 compute-1 sudo[14908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:36:53 compute-1 sudo[14908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14908]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:36:53 compute-1 sudo[14956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14956]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[14981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:36:53 compute-1 sudo[14981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[14981]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[15006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:36:53 compute-1 sudo[15006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[15006]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-1 sudo[15031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:36:53 compute-1 sudo[15031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-1 sudo[15031]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:36:54 compute-1 sudo[15056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15056]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-1 sudo[15081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15081]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:54 compute-1 sudo[15106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15106]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-1 sudo[15131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15131]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-1 sudo[15179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15179]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-1 sudo[15204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15204]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 09:36:54 compute-1 sudo[15229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15229]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:36:54 compute-1 sudo[15254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15254]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:36:54 compute-1 sudo[15279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15279]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-1 sudo[15304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15304]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:54.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:54 compute-1 sudo[15329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:54 compute-1 sudo[15329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15329]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-1 sudo[15354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15354]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-1 sudo[15402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15402]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-1 sudo[15427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15427]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:36:54 compute-1 sudo[15452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15452]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 09:36:54 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 09:36:54 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 09:36:54 compute-1 ceph-mon[9795]: pgmap v5: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:54 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:36:54 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:36:54 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:36:54 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:36:54 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:36:54 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:36:54 compute-1 ceph-mon[9795]: mgrmap e31: compute-0.lwqgfy(active, since 4s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 09 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:54 compute-1 sudo[15477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:54 compute-1 sudo[15477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-1 sudo[15477]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-1 sudo[15502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:54 compute-1 sudo[15502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.217022868 +0000 UTC m=+0.026768445 container create f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:36:55 compute-1 systemd[1]: Started libpod-conmon-f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3.scope.
Oct 09 09:36:55 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:36:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:36:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:55.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.261537669 +0000 UTC m=+0.071283256 container init f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.266172236 +0000 UTC m=+0.075917814 container start f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.267189934 +0000 UTC m=+0.076935511 container attach f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 09 09:36:55 compute-1 ecstatic_heisenberg[15574]: 167 167
Oct 09 09:36:55 compute-1 systemd[1]: libpod-f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3.scope: Deactivated successfully.
Oct 09 09:36:55 compute-1 conmon[15574]: conmon f229624d541024804a0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3.scope/container/memory.events
Oct 09 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.269736014 +0000 UTC m=+0.079481592 container died f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 09 09:36:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-ae7113e6fa95e0ab257ec017615da95b684af17ae82a5181128d7cd4cc5f503c-merged.mount: Deactivated successfully.
Oct 09 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.293616394 +0000 UTC m=+0.103361971 container remove f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 09 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.206069739 +0000 UTC m=+0.015815316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:55 compute-1 systemd[1]: libpod-conmon-f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3.scope: Deactivated successfully.
Oct 09 09:36:55 compute-1 systemd[1]: Reloading.
Oct 09 09:36:55 compute-1 systemd-rc-local-generator[15607]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:55 compute-1 systemd-sysv-generator[15611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:55 compute-1 systemd[1]: Reloading.
Oct 09 09:36:55 compute-1 systemd-rc-local-generator[15649]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:55 compute-1 systemd-sysv-generator[15652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:55 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress
                                          service_id: nfs.cephfs
                                          service_name: ingress.nfs.cephfs
                                          placement:
                                            hosts:
                                            - compute-0
                                            - compute-1
                                            - compute-2
                                          spec:
                                            backend_service: nfs.cephfs
                                            enable_haproxy_protocol: true
                                            first_virtual_router_id: 50
                                            frontend_port: 2049
                                            monitor_port: 9049
                                            virtual_ip: 192.168.122.2/24
                                          ''')): max() arg is an empty sequence
                                          Traceback (most recent call last):
                                            File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
                                              if self._apply_service(spec):
                                            File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service
                                              daemon_spec = svc.prepare_create(daemon_spec)
                                            File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create
                                              return self.haproxy_prepare_create(daemon_spec)
                                            File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create
                                              daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
                                            File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config
                                              num_ranks = 1 + max(by_rank.keys())
                                          ValueError: max() arg is an empty sequence
Oct 09 09:36:55 compute-1 ceph-mon[9795]: pgmap v6: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 09 09:36:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 09 09:36:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Rados config object exists: conf-nfs.cephfs
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr-rgw
Oct 09 09:36:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:36:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Bind address in nfs.cephfs.0.0.compute-1.douegr's ganesha conf is defaulting to empty
Oct 09 09:36:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Deploying daemon nfs.cephfs.0.0.compute-1.douegr on compute-1
Oct 09 09:36:55 compute-1 ceph-mon[9795]: Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Oct 09 09:36:55 compute-1 podman[15703]: 2025-10-09 09:36:55.880421867 +0000 UTC m=+0.026015516 container create 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 09 09:36:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:55 compute-1 podman[15703]: 2025-10-09 09:36:55.924869311 +0000 UTC m=+0.070462980 container init 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 09 09:36:55 compute-1 podman[15703]: 2025-10-09 09:36:55.928570008 +0000 UTC m=+0.074163657 container start 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:36:55 compute-1 bash[15703]: 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793
Oct 09 09:36:55 compute-1 podman[15703]: 2025-10-09 09:36:55.869995491 +0000 UTC m=+0.015589160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:55 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:36:55 compute-1 sudo[15502]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 09 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 09 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:36:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:56.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:56 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam
Oct 09 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 09 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 09 09:36:56 compute-1 ceph-mon[9795]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 09 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 09 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 09 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct 09 09:36:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:36:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:36:57 compute-1 ceph-mon[9795]: pgmap v7: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 330 B/s wr, 12 op/s
Oct 09 09:36:57 compute-1 ceph-mon[9795]: osdmap e42: 3 total, 3 up, 3 in
Oct 09 09:36:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct 09 09:36:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:58.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:59 compute-1 ceph-mon[9795]: osdmap e43: 3 total, 3 up, 3 in
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:36:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:36:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:00 compute-1 ceph-mon[9795]: pgmap v10: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 283 B/s wr, 10 op/s
Oct 09 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 09 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 09 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:00.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:00 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:00 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:37:00 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:00 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:37:00 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:00 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:37:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:01 compute-1 ceph-mon[9795]: Rados config object exists: conf-nfs.cephfs
Oct 09 09:37:01 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam-rgw
Oct 09 09:37:01 compute-1 ceph-mon[9795]: Bind address in nfs.cephfs.1.0.compute-2.cpioam's ganesha conf is defaulting to empty
Oct 09 09:37:01 compute-1 ceph-mon[9795]: Deploying daemon nfs.cephfs.1.0.compute-2.cpioam on compute-2
Oct 09 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 09 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 09 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 09 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 09 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:02 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy
Oct 09 09:37:02 compute-1 ceph-mon[9795]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 09 09:37:02 compute-1 ceph-mon[9795]: pgmap v11: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Oct 09 09:37:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:37:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:02.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:37:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:03 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:37:04 compute-1 ceph-mon[9795]: pgmap v12: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.1 KiB/s wr, 12 op/s
Oct 09 09:37:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 09 09:37:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 09 09:37:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:37:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:37:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:37:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:37:05 compute-1 ceph-mon[9795]: Rados config object exists: conf-nfs.cephfs
Oct 09 09:37:05 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw
Oct 09 09:37:05 compute-1 ceph-mon[9795]: Bind address in nfs.cephfs.2.0.compute-0.rlqbpy's ganesha conf is defaulting to empty
Oct 09 09:37:05 compute-1 ceph-mon[9795]: Deploying daemon nfs.cephfs.2.0.compute-0.rlqbpy on compute-0
Oct 09 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:05.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.044986) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626045051, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 5696, "num_deletes": 258, "total_data_size": 19262450, "memory_usage": 20425240, "flush_reason": "Manual Compaction"}
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct 09 09:37:06 compute-1 ceph-mon[9795]: pgmap v13: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 895 B/s wr, 2 op/s
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626065727, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12329900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 5701, "table_properties": {"data_size": 12308297, "index_size": 13617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6917, "raw_key_size": 66672, "raw_average_key_size": 24, "raw_value_size": 12254607, "raw_average_value_size": 4451, "num_data_blocks": 604, "num_entries": 2753, "num_filter_entries": 2753, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 1760002515, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 20764 microseconds, and 14365 cpu microseconds.
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.065756) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12329900 bytes OK
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.065770) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.067561) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.067572) EVENT_LOG_v1 {"time_micros": 1760002626067569, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.067582) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 19231344, prev total WAL file size 19233248, number of live WAL files 2.
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070000) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1648B)]
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626070072, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12331548, "oldest_snapshot_seqno": -1}
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 2499 keys, 12326269 bytes, temperature: kUnknown
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626088753, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12326269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12305323, "index_size": 13605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6277, "raw_key_size": 63153, "raw_average_key_size": 25, "raw_value_size": 12254887, "raw_average_value_size": 4903, "num_data_blocks": 602, "num_entries": 2499, "num_filter_entries": 2499, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.088863) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12326269 bytes
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.089146) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 659.1 rd, 658.8 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.8, 0.0 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2758, records dropped: 259 output_compression: NoCompression
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.089159) EVENT_LOG_v1 {"time_micros": 1760002626089154, "job": 4, "event": "compaction_finished", "compaction_time_micros": 18711, "compaction_time_cpu_micros": 14935, "output_level": 6, "num_output_files": 1, "total_output_size": 12326269, "num_input_records": 2758, "num_output_records": 2499, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626090588, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626090622, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 09 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.069926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000020s ======
Oct 09 09:37:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:06.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Oct 09 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:37:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:37:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:37:08 compute-1 ceph-mon[9795]: pgmap v14: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.9 KiB/s wr, 5 op/s
Oct 09 09:37:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:08 compute-1 sudo[15772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:37:08 compute-1 sudo[15772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:08 compute-1 sudo[15772]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:08 compute-1 sudo[15797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:37:08 compute-1 sudo[15797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:08 compute-1 sudo[15797]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:08 compute-1 sudo[15822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:08 compute-1 sudo[15822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:08 compute-1 sudo[15822]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:08 compute-1 sudo[15847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:37:08 compute-1 sudo[15847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:08.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:08 compute-1 podman[15928]: 2025-10-09 09:37:08.650011015 +0000 UTC m=+0.043002220 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Oct 09 09:37:08 compute-1 podman[15928]: 2025-10-09 09:37:08.730870965 +0000 UTC m=+0.123862180 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 09 09:37:09 compute-1 podman[16023]: 2025-10-09 09:37:09.01320039 +0000 UTC m=+0.034662717 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:09 compute-1 podman[16023]: 2025-10-09 09:37:09.020847928 +0000 UTC m=+0.042310255 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:09 compute-1 podman[16098]: 2025-10-09 09:37:09.218978529 +0000 UTC m=+0.032755161 container exec 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:37:09 compute-1 podman[16098]: 2025-10-09 09:37:09.228873093 +0000 UTC m=+0.042649726 container exec_died 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:37:09 compute-1 sudo[15847]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:37:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:09.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:37:10 compute-1 ceph-mon[9795]: pgmap v15: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Oct 09 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:10 compute-1 sudo[16125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:10 compute-1 sudo[16125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:10 compute-1 sudo[16125]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:10 compute-1 sudo[16150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:37:10 compute-1 sudo[16150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:37:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:11 compute-1 ceph-mon[9795]: pgmap v16: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Oct 09 09:37:11 compute-1 ceph-mon[9795]: Deploying daemon haproxy.nfs.cephfs.compute-1.oqhtjo on compute-1
Oct 09 09:37:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:37:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:11.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:37:12 compute-1 ceph-mon[9795]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Oct 09 09:37:12 compute-1 ceph-mon[9795]: Cluster is now healthy
Oct 09 09:37:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:12.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.83672523 +0000 UTC m=+2.241452121 container create 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct 09 09:37:12 compute-1 systemd[1]: Started libpod-conmon-382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600.scope.
Oct 09 09:37:12 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.885413499 +0000 UTC m=+2.290140409 container init 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct 09 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.89023063 +0000 UTC m=+2.294957521 container start 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct 09 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.891342115 +0000 UTC m=+2.296069016 container attach 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct 09 09:37:12 compute-1 exciting_lehmann[16305]: 0 0
Oct 09 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.893893916 +0000 UTC m=+2.298620807 container died 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct 09 09:37:12 compute-1 systemd[1]: libpod-382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600.scope: Deactivated successfully.
Oct 09 09:37:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-b67043b0ab48d77a326e5afd1e602e3b4cbff68e49c4777c651176fb8a7accc3-merged.mount: Deactivated successfully.
Oct 09 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.912310335 +0000 UTC m=+2.317037226 container remove 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct 09 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.82702356 +0000 UTC m=+2.231750472 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 09:37:12 compute-1 systemd[1]: libpod-conmon-382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600.scope: Deactivated successfully.
Oct 09 09:37:12 compute-1 systemd[1]: Reloading.
Oct 09 09:37:13 compute-1 systemd-rc-local-generator[16343]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:37:13 compute-1 systemd-sysv-generator[16346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:37:13 compute-1 systemd[1]: Reloading.
Oct 09 09:37:13 compute-1 systemd-rc-local-generator[16384]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:37:13 compute-1 systemd-sysv-generator[16387]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:37:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:13.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:13 compute-1 ceph-mon[9795]: pgmap v17: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.9 KiB/s wr, 7 op/s
Oct 09 09:37:13 compute-1 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.oqhtjo for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:37:13 compute-1 podman[16439]: 2025-10-09 09:37:13.508246952 +0000 UTC m=+0.027367334 container create 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:37:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743fa8842ec3c099644aaec4ffea3c5649147df8feaf40b47accfee2667036f6/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:13 compute-1 podman[16439]: 2025-10-09 09:37:13.551168269 +0000 UTC m=+0.070288671 container init 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:37:13 compute-1 podman[16439]: 2025-10-09 09:37:13.555011694 +0000 UTC m=+0.074132077 container start 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:37:13 compute-1 bash[16439]: 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3
Oct 09 09:37:13 compute-1 podman[16439]: 2025-10-09 09:37:13.496590137 +0000 UTC m=+0.015710538 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 09:37:13 compute-1 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.oqhtjo for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:37:13 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [NOTICE] 281/093713 (2) : New worker #1 (4) forked
Oct 09 09:37:13 compute-1 kernel: ganesha.nfsd[16461]: segfault at 50 ip 00007f0c6750332e sp 00007f0c25ffa210 error 4 likely on CPU 3 (core 0, socket 3)
Oct 09 09:37:13 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 09 09:37:13 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:13 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bbc000df0 fd 37 proxy ignored for local
Oct 09 09:37:13 compute-1 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 09 09:37:13 compute-1 sudo[16150]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:13 compute-1 systemd[1]: Started Process Core Dump (PID 16464/UID 0).
Oct 09 09:37:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:14.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:14 compute-1 systemd-coredump[16465]: Process 15719 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 52:
                                                   #0  0x00007f0c6750332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 09:37:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:14 compute-1 ceph-mon[9795]: Deploying daemon haproxy.nfs.cephfs.compute-0.ujrhwc on compute-0
Oct 09 09:37:14 compute-1 systemd[1]: systemd-coredump@0-16464-0.service: Deactivated successfully.
Oct 09 09:37:14 compute-1 podman[16472]: 2025-10-09 09:37:14.660380308 +0000 UTC m=+0.017571207 container died 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:37:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8-merged.mount: Deactivated successfully.
Oct 09 09:37:14 compute-1 podman[16472]: 2025-10-09 09:37:14.677341993 +0000 UTC m=+0.034532883 container remove 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 09 09:37:14 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct 09 09:37:14 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct 09 09:37:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:15.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:15 compute-1 ceph-mon[9795]: pgmap v18: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.9 KiB/s wr, 7 op/s
Oct 09 09:37:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:15 compute-1 ceph-mon[9795]: Deploying daemon haproxy.nfs.cephfs.compute-2.iyubhq on compute-2
Oct 09 09:37:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:16.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:17 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:37:17 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:37:17 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 09 09:37:17 compute-1 ceph-mon[9795]: Deploying daemon keepalived.nfs.cephfs.compute-2.dgxvnq on compute-2
Oct 09 09:37:17 compute-1 ceph-mon[9795]: pgmap v19: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.9 KiB/s wr, 7 op/s
Oct 09 09:37:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:17.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:17 compute-1 sudo[16504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:17 compute-1 sudo[16504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:17 compute-1 sudo[16504]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:17 compute-1 sudo[16529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:37:17 compute-1 sudo[16529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:18.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:18 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 09 09:37:18 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:37:18 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:37:18 compute-1 ceph-mon[9795]: Deploying daemon keepalived.nfs.cephfs.compute-1.zabdum on compute-1
Oct 09 09:37:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:19.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:19 compute-1 ceph-mon[9795]: pgmap v20: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 973 B/s wr, 4 op/s
Oct 09 09:37:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093719 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:37:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:20.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.598812742 +0000 UTC m=+2.760373948 container create 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, vcs-type=git, release=1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 09 09:37:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:20 compute-1 systemd[1]: Started libpod-conmon-90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39.scope.
Oct 09 09:37:20 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.655351601 +0000 UTC m=+2.816912827 container init 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, release=1793)
Oct 09 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.587554007 +0000 UTC m=+2.749115232 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.660794953 +0000 UTC m=+2.822356148 container start 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, io.openshift.expose-services=, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, release=1793, io.buildah.version=1.28.2)
Oct 09 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.66272994 +0000 UTC m=+2.824291147 container attach 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, release=1793, name=keepalived, version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, description=keepalived for Ceph, build-date=2023-02-22T09:23:20)
Oct 09 09:37:20 compute-1 funny_rhodes[16672]: 0 0
Oct 09 09:37:20 compute-1 systemd[1]: libpod-90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39.scope: Deactivated successfully.
Oct 09 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.665956674 +0000 UTC m=+2.827517880 container died 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, description=keepalived for Ceph, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=)
Oct 09 09:37:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-4b46b2208a2f8c074f67b954b7996bb4d1053979c20e54507f1dba52105dc27b-merged.mount: Deactivated successfully.
Oct 09 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.683858954 +0000 UTC m=+2.845420160 container remove 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, distribution-scope=public, io.openshift.tags=Ceph keepalived, version=2.2.4, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph)
Oct 09 09:37:20 compute-1 systemd[1]: libpod-conmon-90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39.scope: Deactivated successfully.
Oct 09 09:37:20 compute-1 systemd[1]: Reloading.
Oct 09 09:37:20 compute-1 systemd-rc-local-generator[16715]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:37:20 compute-1 systemd-sysv-generator[16719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:37:20 compute-1 systemd[1]: Reloading.
Oct 09 09:37:20 compute-1 systemd-rc-local-generator[16751]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:37:20 compute-1 systemd-sysv-generator[16754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:37:21 compute-1 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.zabdum for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:37:21 compute-1 podman[16805]: 2025-10-09 09:37:21.267289504 +0000 UTC m=+0.026296045 container create 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, version=2.2.4, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.display-name=Keepalived on RHEL 9)
Oct 09 09:37:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:21.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7290c0f81c2587cecd84864f4e16047095eaae794dc2e21771419b98ef5168d2/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:21 compute-1 podman[16805]: 2025-10-09 09:37:21.304254511 +0000 UTC m=+0.063261052 container init 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 09 09:37:21 compute-1 podman[16805]: 2025-10-09 09:37:21.307719654 +0000 UTC m=+0.066726185 container start 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.expose-services=, distribution-scope=public, build-date=2023-02-22T09:23:20, version=2.2.4, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 09 09:37:21 compute-1 bash[16805]: 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3
Oct 09 09:37:21 compute-1 podman[16805]: 2025-10-09 09:37:21.256502487 +0000 UTC m=+0.015509038 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 09:37:21 compute-1 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.zabdum for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 09 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct 09 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 09 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 09 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 09 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Starting VRRP child process, pid=4
Oct 09 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Startup complete
Oct 09 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: (VI_0) Entering BACKUP STATE (init)
Oct 09 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: VRRP_Script(check_backend) succeeded
Oct 09 09:37:21 compute-1 sudo[16529]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:21 compute-1 ceph-mon[9795]: pgmap v21: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 973 B/s wr, 4 op/s
Oct 09 09:37:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:22.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:22 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:37:22 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 09 09:37:22 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:37:22 compute-1 ceph-mon[9795]: Deploying daemon keepalived.nfs.cephfs.compute-0.qjivil on compute-0
Oct 09 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:37:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:23.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:37:23 compute-1 ceph-mon[9795]: pgmap v22: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 938 B/s wr, 4 op/s
Oct 09 09:37:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:24.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:24 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 1.
Oct 09 09:37:24 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:37:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:24 2025: (VI_0) Entering MASTER STATE
Oct 09 09:37:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:24 2025: (VI_0) Master received advert from 192.168.122.102 with same priority 90 but higher IP address than ours
Oct 09 09:37:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:24 2025: (VI_0) Entering BACKUP STATE
Oct 09 09:37:24 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:37:25 compute-1 podman[16864]: 2025-10-09 09:37:25.116194741 +0000 UTC m=+0.025445963 container create 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 09 09:37:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:25 compute-1 podman[16864]: 2025-10-09 09:37:25.150625802 +0000 UTC m=+0.059877044 container init 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:37:25 compute-1 podman[16864]: 2025-10-09 09:37:25.155224671 +0000 UTC m=+0.064475894 container start 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:37:25 compute-1 bash[16864]: 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91
Oct 09 09:37:25 compute-1 podman[16864]: 2025-10-09 09:37:25.105783273 +0000 UTC m=+0.015034516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:37:25 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:37:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:25.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:25 compute-1 ceph-mon[9795]: pgmap v23: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:37:25 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:25 compute-1 sudo[16919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:37:25 compute-1 sudo[16919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:25 compute-1 sudo[16919]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:25 compute-1 sudo[16944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:25 compute-1 sudo[16944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:25 compute-1 sudo[16944]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:25 compute-1 sudo[16971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:37:25 compute-1 sudo[16971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:25 compute-1 sshd-session[16957]: Accepted publickey for zuul from 192.168.122.30 port 33108 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:37:25 compute-1 systemd-logind[798]: New session 21 of user zuul.
Oct 09 09:37:25 compute-1 systemd[1]: Started Session 21 of User zuul.
Oct 09 09:37:25 compute-1 sshd-session[16957]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:37:26 compute-1 podman[17106]: 2025-10-09 09:37:26.267671538 +0000 UTC m=+0.040062547 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 09 09:37:26 compute-1 podman[17106]: 2025-10-09 09:37:26.342070057 +0000 UTC m=+0.114461066 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:37:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:26.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:26 compute-1 podman[17301]: 2025-10-09 09:37:26.651903194 +0000 UTC m=+0.036937137 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:26 compute-1 podman[17301]: 2025-10-09 09:37:26.658880256 +0000 UTC m=+0.043914189 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:26 compute-1 python3.9[17268]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:37:26 compute-1 podman[17382]: 2025-10-09 09:37:26.879013959 +0000 UTC m=+0.037600978 container exec 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct 09 09:37:26 compute-1 podman[17382]: 2025-10-09 09:37:26.889875645 +0000 UTC m=+0.048462665 container exec_died 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 09 09:37:27 compute-1 podman[17449]: 2025-10-09 09:37:27.030357325 +0000 UTC m=+0.034439196 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:37:27 compute-1 podman[17467]: 2025-10-09 09:37:27.087798305 +0000 UTC m=+0.045960928 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:37:27 compute-1 podman[17449]: 2025-10-09 09:37:27.090859115 +0000 UTC m=+0.094940976 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:37:27 compute-1 podman[17501]: 2025-10-09 09:37:27.240156465 +0000 UTC m=+0.050158301 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, architecture=x86_64)
Oct 09 09:37:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:27.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:27 compute-1 podman[17533]: 2025-10-09 09:37:27.300769304 +0000 UTC m=+0.045004175 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, architecture=x86_64, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, distribution-scope=public)
Oct 09 09:37:27 compute-1 podman[17501]: 2025-10-09 09:37:27.303019657 +0000 UTC m=+0.113021483 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, version=2.2.4, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.expose-services=, name=keepalived)
Oct 09 09:37:27 compute-1 sudo[16971]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:27 compute-1 ceph-mon[9795]: pgmap v24: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:37:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-1 sudo[17716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpqecvnjppktlnnwkmnzdgfiupysztdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002647.6417675-57-7699481373330/AnsiballZ_command.py'
Oct 09 09:37:28 compute-1 sudo[17716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:37:28 compute-1 sudo[17719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:37:28 compute-1 sudo[17719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:28 compute-1 sudo[17719]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:28 compute-1 python3.9[17718]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:37:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:28.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:29.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:30 compute-1 ceph-mon[9795]: pgmap v25: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:37:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:30.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:31 compute-1 sudo[17765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:37:31 compute-1 sudo[17765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:31 compute-1 sudo[17765]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:31 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:37:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:31 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:37:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:31.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:32 compute-1 ceph-mon[9795]: pgmap v26: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:32.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:33 compute-1 ceph-mon[9795]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 09 09:37:33 compute-1 ceph-mon[9795]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 09 09:37:33 compute-1 ceph-mon[9795]: Reconfiguring mgr.compute-0.lwqgfy (monmap changed)...
Oct 09 09:37:33 compute-1 ceph-mon[9795]: Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
Oct 09 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 09 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:33.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:34 compute-1 ceph-mon[9795]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 09 09:37:34 compute-1 ceph-mon[9795]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 09 09:37:34 compute-1 ceph-mon[9795]: pgmap v27: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 09 09:37:34 compute-1 ceph-mon[9795]: Reconfiguring osd.1 (monmap changed)...
Oct 09 09:37:34 compute-1 ceph-mon[9795]: Reconfiguring daemon osd.1 on compute-0
Oct 09 09:37:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:34 compute-1 sudo[17716]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:34.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:34 compute-1 sshd-session[16997]: Connection closed by 192.168.122.30 port 33108
Oct 09 09:37:34 compute-1 sshd-session[16957]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:37:34 compute-1 systemd[1]: session-21.scope: Deactivated successfully.
Oct 09 09:37:34 compute-1 systemd[1]: session-21.scope: Consumed 6.344s CPU time.
Oct 09 09:37:34 compute-1 systemd-logind[798]: Session 21 logged out. Waiting for processes to exit.
Oct 09 09:37:34 compute-1 systemd-logind[798]: Removed session 21.
Oct 09 09:37:35 compute-1 ceph-mon[9795]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct 09 09:37:35 compute-1 ceph-mon[9795]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct 09 09:37:35 compute-1 ceph-mon[9795]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 09 09:37:35 compute-1 ceph-mon[9795]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 09 09:37:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:37:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:35.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:36 compute-1 ceph-mon[9795]: pgmap v28: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 09 09:37:36 compute-1 ceph-mon[9795]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 09 09:37:36 compute-1 ceph-mon[9795]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct 09 09:37:36 compute-1 sudo[17832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:36 compute-1 sudo[17832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:36 compute-1 sudo[17832]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:36 compute-1 sudo[17857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:37:36 compute-1 sudo[17857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.474335179 +0000 UTC m=+0.025915363 container create 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 09 09:37:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:36 compute-1 systemd[1]: Started libpod-conmon-94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5.scope.
Oct 09 09:37:36 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.525843784 +0000 UTC m=+0.077423969 container init 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 09 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.531017994 +0000 UTC m=+0.082598178 container start 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 09 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.532119407 +0000 UTC m=+0.083699591 container attach 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:37:36 compute-1 frosty_shtern[17910]: 167 167
Oct 09 09:37:36 compute-1 systemd[1]: libpod-94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5.scope: Deactivated successfully.
Oct 09 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.535309293 +0000 UTC m=+0.086889476 container died 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:37:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-b653a4a6b71d2881fec17506b17a906a83bfb100db98b19196a8f1544af97ae3-merged.mount: Deactivated successfully.
Oct 09 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.552010099 +0000 UTC m=+0.103590273 container remove 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 09 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.463473934 +0000 UTC m=+0.015054128 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:37:36 compute-1 systemd[1]: libpod-conmon-94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5.scope: Deactivated successfully.
Oct 09 09:37:36 compute-1 sudo[17857]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:36 compute-1 sudo[17925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:36 compute-1 sudo[17925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:36 compute-1 sudo[17925]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:36 compute-1 sudo[17950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:37:36 compute-1 sudo[17950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.911720224 +0000 UTC m=+0.031179841 container create 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:37:36 compute-1 systemd[1]: Started libpod-conmon-8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb.scope.
Oct 09 09:37:36 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.960248029 +0000 UTC m=+0.079707646 container init 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.964065976 +0000 UTC m=+0.083525593 container start 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.965233123 +0000 UTC m=+0.084692740 container attach 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 09 09:37:36 compute-1 friendly_ritchie[18003]: 167 167
Oct 09 09:37:36 compute-1 systemd[1]: libpod-8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb.scope: Deactivated successfully.
Oct 09 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.967029153 +0000 UTC m=+0.086488770 container died 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:37:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-a5aae6e8f51bc76d955cf7132007c269f3aa32ffef0bc1f3dd9df84bd66c950d-merged.mount: Deactivated successfully.
Oct 09 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.984703793 +0000 UTC m=+0.104163411 container remove 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 09 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.898667695 +0000 UTC m=+0.018127311 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:37:36 compute-1 systemd[1]: libpod-conmon-8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb.scope: Deactivated successfully.
Oct 09 09:37:37 compute-1 sudo[17950]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:37 compute-1 sudo[18024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:37 compute-1 sudo[18024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:37 compute-1 sudo[18024]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-1 ceph-mon[9795]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:37 compute-1 ceph-mon[9795]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 09 09:37:37 compute-1 ceph-mon[9795]: pgmap v29: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-1 ceph-mon[9795]: Reconfiguring osd.0 (monmap changed)...
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:37 compute-1 ceph-mon[9795]: Reconfiguring daemon osd.0 on compute-1
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:37 compute-1 sudo[18049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:37:37 compute-1 sudo[18049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:37:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:37.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.410740647 +0000 UTC m=+0.026506806 container create 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 09 09:37:37 compute-1 systemd[1]: Started libpod-conmon-787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04.scope.
Oct 09 09:37:37 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.44661226 +0000 UTC m=+0.062378428 container init 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct 09 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.45047453 +0000 UTC m=+0.066240688 container start 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.451604697 +0000 UTC m=+0.067370865 container attach 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:37:37 compute-1 inspiring_lalande[18116]: 167 167
Oct 09 09:37:37 compute-1 systemd[1]: libpod-787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04.scope: Deactivated successfully.
Oct 09 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.454307235 +0000 UTC m=+0.070073392 container died 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.472238688 +0000 UTC m=+0.088004846 container remove 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 09 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.399441348 +0000 UTC m=+0.015207527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:37:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-ec81c338c7eadd649349f9a69b40910688d15fe287d3bb55d8ef180f42839dbe-merged.mount: Deactivated successfully.
Oct 09 09:37:37 compute-1 systemd[1]: libpod-conmon-787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04.scope: Deactivated successfully.
Oct 09 09:37:37 compute-1 sudo[18049]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda30000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:38 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:38 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20001950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 09 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring mgr.compute-2.takdnm (monmap changed)...
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring daemon mgr.compute-2.takdnm on compute-2
Oct 09 09:37:38 compute-1 sudo[18131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:38 compute-1 sudo[18131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:38 compute-1 sudo[18131]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:38 compute-1 sudo[18156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:37:38 compute-1 sudo[18156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:38 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:38 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda24001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:39 compute-1 podman[18236]: 2025-10-09 09:37:39.048847464 +0000 UTC m=+0.038556878 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 09 09:37:39 compute-1 podman[18236]: 2025-10-09 09:37:39.124388367 +0000 UTC m=+0.114097771 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 09 09:37:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:39.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:39 compute-1 podman[18331]: 2025-10-09 09:37:39.421802981 +0000 UTC m=+0.034931414 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:39 compute-1 podman[18331]: 2025-10-09 09:37:39.429904671 +0000 UTC m=+0.043033104 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:39 compute-1 ceph-mon[9795]: pgmap v30: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 09 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 09 09:37:39 compute-1 ceph-mon[9795]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 09 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 09 09:37:39 compute-1 ceph-mon[9795]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 09 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 09 09:37:39 compute-1 ceph-mon[9795]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 09 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093739 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:37:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:39 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20002270 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:39 compute-1 podman[18405]: 2025-10-09 09:37:39.648433717 +0000 UTC m=+0.037481463 container exec 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:37:39 compute-1 podman[18405]: 2025-10-09 09:37:39.654985999 +0000 UTC m=+0.044033767 container exec_died 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 09 09:37:39 compute-1 podman[18456]: 2025-10-09 09:37:39.808735488 +0000 UTC m=+0.048128014 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:37:39 compute-1 podman[18456]: 2025-10-09 09:37:39.818879051 +0000 UTC m=+0.058271578 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:37:39 compute-1 podman[18506]: 2025-10-09 09:37:39.953064825 +0000 UTC m=+0.032993045 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph)
Oct 09 09:37:39 compute-1 podman[18506]: 2025-10-09 09:37:39.963885142 +0000 UTC m=+0.043813351 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, com.redhat.component=keepalived-container, architecture=x86_64, name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, version=2.2.4, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 09 09:37:39 compute-1 sudo[18156]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:40 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda2c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:40 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20002270 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:41.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:41 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda24002990 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:42 compute-1 ceph-mon[9795]: pgmap v31: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 09 09:37:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:42 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20002f80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:42.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:42 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda2c0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:43.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:43 compute-1 kernel: ganesha.nfsd[18078]: segfault at 50 ip 00007fdadf75232e sp 00007fdaad7f9210 error 4 in libntirpc.so.5.8[7fdadf737000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 09 09:37:43 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 09 09:37:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:43 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20002f80 fd 37 proxy ignored for local
Oct 09 09:37:43 compute-1 systemd[1]: Started Process Core Dump (PID 18535/UID 0).
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.816778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663816818, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1462, "num_deletes": 251, "total_data_size": 4339487, "memory_usage": 4409664, "flush_reason": "Manual Compaction"}
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663822808, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 2428875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 5706, "largest_seqno": 7163, "table_properties": {"data_size": 2422768, "index_size": 3178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14731, "raw_average_key_size": 20, "raw_value_size": 2409516, "raw_average_value_size": 3318, "num_data_blocks": 147, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002626, "oldest_key_time": 1760002626, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6324 microseconds, and 3908 cpu microseconds.
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823106) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 2428875 bytes OK
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823202) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823719) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823730) EVENT_LOG_v1 {"time_micros": 1760002663823727, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823742) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 4332194, prev total WAL file size 4332194, number of live WAL files 2.
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824948) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(2371KB)], [15(11MB)]
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663824969, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14755144, "oldest_snapshot_seqno": -1}
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 2699 keys, 13382293 bytes, temperature: kUnknown
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663857326, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 13382293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13360127, "index_size": 14313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6789, "raw_key_size": 68577, "raw_average_key_size": 25, "raw_value_size": 13305993, "raw_average_value_size": 4929, "num_data_blocks": 634, "num_entries": 2699, "num_filter_entries": 2699, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857643) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13382293 bytes
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.858173) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 453.2 rd, 411.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(11.6) write-amplify(5.5) OK, records in: 3225, records dropped: 526 output_compression: NoCompression
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.858189) EVENT_LOG_v1 {"time_micros": 1760002663858182, "job": 6, "event": "compaction_finished", "compaction_time_micros": 32560, "compaction_time_cpu_micros": 16979, "output_level": 6, "num_output_files": 1, "total_output_size": 13382293, "num_input_records": 3225, "num_output_records": 2699, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663858924, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663860354, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-1 sudo[18537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:37:43 compute-1 sudo[18537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:43 compute-1 sudo[18537]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:44 compute-1 ceph-mon[9795]: pgmap v32: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 09 09:37:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:44.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:44 compute-1 systemd-coredump[18536]: Process 16880 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 45:
                                                   #0  0x00007fdadf75232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 09:37:44 compute-1 systemd[1]: systemd-coredump@1-18535-0.service: Deactivated successfully.
Oct 09 09:37:44 compute-1 podman[18569]: 2025-10-09 09:37:44.672989095 +0000 UTC m=+0.020896104 container died 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 09 09:37:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7-merged.mount: Deactivated successfully.
Oct 09 09:37:44 compute-1 podman[18569]: 2025-10-09 09:37:44.689516505 +0000 UTC m=+0.037423515 container remove 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:37:44 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct 09 09:37:44 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct 09 09:37:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:45.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:46 compute-1 ceph-mon[9795]: pgmap v33: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:37:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:47.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:48 compute-1 ceph-mon[9795]: pgmap v34: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:37:48 compute-1 sudo[18604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:37:48 compute-1 sudo[18604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:48 compute-1 sudo[18604]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:48.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:49.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093749 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:37:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct 09 09:37:50 compute-1 ceph-mon[9795]: pgmap v35: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:37:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:37:50 compute-1 sshd-session[18630]: Accepted publickey for zuul from 192.168.122.30 port 35844 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:37:50 compute-1 systemd-logind[798]: New session 22 of user zuul.
Oct 09 09:37:50 compute-1 systemd[1]: Started Session 22 of User zuul.
Oct 09 09:37:50 compute-1 sshd-session[18630]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:37:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:50.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct 09 09:37:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:51 compute-1 ceph-mon[9795]: osdmap e44: 3 total, 3 up, 3 in
Oct 09 09:37:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:51 compute-1 python3.9[18783]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 09 09:37:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:51.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct 09 09:37:52 compute-1 ceph-mon[9795]: pgmap v37: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:37:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:52 compute-1 ceph-mon[9795]: osdmap e45: 3 total, 3 up, 3 in
Oct 09 09:37:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:52 compute-1 ceph-mon[9795]: osdmap e46: 3 total, 3 up, 3 in
Oct 09 09:37:52 compute-1 python3.9[18958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:37:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:52.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct 09 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 09 09:37:53 compute-1 ceph-mon[9795]: 3.1e scrub starts
Oct 09 09:37:53 compute-1 ceph-mon[9795]: 3.1e scrub ok
Oct 09 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 09 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:53 compute-1 ceph-mon[9795]: osdmap e47: 3 total, 3 up, 3 in
Oct 09 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:53 compute-1 sudo[19112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dblhjortqreohkyedfxadttbyrgvkdkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002672.679264-94-280609417916569/AnsiballZ_command.py'
Oct 09 09:37:53 compute-1 sudo[19112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:37:53 compute-1 python3.9[19114]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:37:53 compute-1 sudo[19112]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:53.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:53 compute-1 sudo[19266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lopwhmialkpttcjtdyvthrenwttbrqom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002673.6226094-130-16355321112500/AnsiballZ_stat.py'
Oct 09 09:37:53 compute-1 sudo[19266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:37:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct 09 09:37:54 compute-1 ceph-mon[9795]: pgmap v40: 74 pgs: 31 unknown, 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:37:54 compute-1 ceph-mon[9795]: 3.16 deep-scrub starts
Oct 09 09:37:54 compute-1 ceph-mon[9795]: 3.16 deep-scrub ok
Oct 09 09:37:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:54 compute-1 ceph-mon[9795]: osdmap e48: 3 total, 3 up, 3 in
Oct 09 09:37:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:54 compute-1 python3.9[19268]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:37:54 compute-1 sudo[19266]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:54 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093754 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:37:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:54.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:54 compute-1 sudo[19420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpzgbfnusmlabfxjzqbdffotjybyvuvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002674.396217-163-212670900793081/AnsiballZ_file.py'
Oct 09 09:37:54 compute-1 sudo[19420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:37:54 compute-1 python3.9[19422]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:37:54 compute-1 sudo[19420]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:54 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 2.
Oct 09 09:37:54 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:37:54 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:37:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct 09 09:37:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 49 pg[7.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=49 pruub=10.453458786s) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active pruub 193.927215576s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:37:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 49 pg[7.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=49 pruub=10.453458786s) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown pruub 193.927215576s@ mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:55 compute-1 ceph-mon[9795]: 3.18 scrub starts
Oct 09 09:37:55 compute-1 ceph-mon[9795]: 3.18 scrub ok
Oct 09 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 09 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 09 09:37:55 compute-1 ceph-mon[9795]: osdmap e49: 3 total, 3 up, 3 in
Oct 09 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:55 compute-1 podman[19537]: 2025-10-09 09:37:55.142752122 +0000 UTC m=+0.027612848 container create 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:37:55 compute-1 systemd[1268]: Created slice User Background Tasks Slice.
Oct 09 09:37:55 compute-1 systemd[1268]: Starting Cleanup of User's Temporary Files and Directories...
Oct 09 09:37:55 compute-1 systemd[11486]: Starting Mark boot as successful...
Oct 09 09:37:55 compute-1 systemd[11486]: Finished Mark boot as successful.
Oct 09 09:37:55 compute-1 systemd[1268]: Finished Cleanup of User's Temporary Files and Directories.
Oct 09 09:37:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:55 compute-1 podman[19537]: 2025-10-09 09:37:55.179227982 +0000 UTC m=+0.064088718 container init 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Oct 09 09:37:55 compute-1 podman[19537]: 2025-10-09 09:37:55.184045459 +0000 UTC m=+0.068906185 container start 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:37:55 compute-1 bash[19537]: 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d
Oct 09 09:37:55 compute-1 podman[19537]: 2025-10-09 09:37:55.131570735 +0000 UTC m=+0.016431481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:37:55 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:37:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:37:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:55.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:37:55 compute-1 python3.9[19667]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:37:55 compute-1 network[19685]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:37:55 compute-1 network[19686]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:37:55 compute-1 network[19687]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:37:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1c( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1a( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.19( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.17( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.15( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.16( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.10( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1e( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.e( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.c( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.a( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.8( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.b( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.d( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.2( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.7( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.5( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.14( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.11( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.12( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1d( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1c( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1a( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.19( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.17( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.c( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.15( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.7( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.d( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.0( empty local-lis/les=49/50 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.12( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:56 compute-1 ceph-mon[9795]: pgmap v43: 136 pgs: 93 unknown, 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:37:56 compute-1 ceph-mon[9795]: 4.1a scrub starts
Oct 09 09:37:56 compute-1 ceph-mon[9795]: 4.1a scrub ok
Oct 09 09:37:56 compute-1 ceph-mon[9795]: 3.1b scrub starts
Oct 09 09:37:56 compute-1 ceph-mon[9795]: 3.1b scrub ok
Oct 09 09:37:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:56 compute-1 ceph-mon[9795]: osdmap e50: 3 total, 3 up, 3 in
Oct 09 09:37:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:56 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Oct 09 09:37:56 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Oct 09 09:37:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:56.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct 09 09:37:57 compute-1 ceph-mon[9795]: 4.1d deep-scrub starts
Oct 09 09:37:57 compute-1 ceph-mon[9795]: 4.1d deep-scrub ok
Oct 09 09:37:57 compute-1 ceph-mon[9795]: 3.17 deep-scrub starts
Oct 09 09:37:57 compute-1 ceph-mon[9795]: 3.17 deep-scrub ok
Oct 09 09:37:57 compute-1 ceph-mon[9795]: 7.1c deep-scrub starts
Oct 09 09:37:57 compute-1 ceph-mon[9795]: 7.1c deep-scrub ok
Oct 09 09:37:57 compute-1 ceph-mon[9795]: pgmap v46: 182 pgs: 2 peering, 46 unknown, 134 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 09 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:57 compute-1 ceph-mon[9795]: osdmap e51: 3 total, 3 up, 3 in
Oct 09 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:57 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 09 09:37:57 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 09 09:37:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:57.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct 09 09:37:58 compute-1 ceph-mon[9795]: 4.1b deep-scrub starts
Oct 09 09:37:58 compute-1 ceph-mon[9795]: 4.1b deep-scrub ok
Oct 09 09:37:58 compute-1 ceph-mon[9795]: 3.14 deep-scrub starts
Oct 09 09:37:58 compute-1 ceph-mon[9795]: 3.14 deep-scrub ok
Oct 09 09:37:58 compute-1 ceph-mon[9795]: 7.1b scrub starts
Oct 09 09:37:58 compute-1 ceph-mon[9795]: 7.1b scrub ok
Oct 09 09:37:58 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:58 compute-1 ceph-mon[9795]: osdmap e52: 3 total, 3 up, 3 in
Oct 09 09:37:58 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:58 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 09 09:37:58 compute-1 python3.9[19951]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:37:58 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 09 09:37:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:58.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:58 compute-1 python3.9[20101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:37:59 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct 09 09:37:59 compute-1 ceph-mon[9795]: 4.17 scrub starts
Oct 09 09:37:59 compute-1 ceph-mon[9795]: 4.17 scrub ok
Oct 09 09:37:59 compute-1 ceph-mon[9795]: 3.13 scrub starts
Oct 09 09:37:59 compute-1 ceph-mon[9795]: 3.13 scrub ok
Oct 09 09:37:59 compute-1 ceph-mon[9795]: 7.1a scrub starts
Oct 09 09:37:59 compute-1 ceph-mon[9795]: 7.1a scrub ok
Oct 09 09:37:59 compute-1 ceph-mon[9795]: pgmap v49: 244 pgs: 2 peering, 108 unknown, 134 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 09 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:59 compute-1 ceph-mon[9795]: osdmap e53: 3 total, 3 up, 3 in
Oct 09 09:37:59 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct 09 09:37:59 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct 09 09:37:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:37:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:00 compute-1 python3.9[20256]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:38:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct 09 09:38:00 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 53 pg[10.0( v 40'1059 (0'0,40'1059] local-lis/les=34/35 n=178 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=53 pruub=10.394917488s) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 40'1058 mlcod 40'1058 active pruub 198.924484253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:00 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.0( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=53 pruub=10.394917488s) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 40'1058 mlcod 0'0 unknown pruub 198.924484253s@ mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-mon[9795]: 4.16 scrub starts
Oct 09 09:38:00 compute-1 ceph-mon[9795]: 4.16 scrub ok
Oct 09 09:38:00 compute-1 ceph-mon[9795]: 3.f scrub starts
Oct 09 09:38:00 compute-1 ceph-mon[9795]: 3.f scrub ok
Oct 09 09:38:00 compute-1 ceph-mon[9795]: 7.18 scrub starts
Oct 09 09:38:00 compute-1 ceph-mon[9795]: 7.18 scrub ok
Oct 09 09:38:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:00 compute-1 ceph-mon[9795]: osdmap e54: 3 total, 3 up, 3 in
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b5087a8 space 0x560c9b2c89d0 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b29b7e8 space 0x560c9b3b5d50 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f82a8 space 0x560c9b25caa0 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4dc8e8 space 0x560c9ae1b050 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f9ba8 space 0x560c9b2cbd50 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4dc3e8 space 0x560c9b314b70 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4dd608 space 0x560c9b3a3390 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4cb1a8 space 0x560c9b3a89d0 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e4528 space 0x560c9b29e760 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b2422a8 space 0x560c9b281530 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e5ba8 space 0x560c9b3b7460 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b508fc8 space 0x560c9b3de690 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4ca028 space 0x560c9b2c4830 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e4f28 space 0x560c9b4712c0 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b508de8 space 0x560c9b33dc80 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f88e8 space 0x560c9b2df940 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f8208 space 0x560c9b361600 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4cbf68 space 0x560c9b4717a0 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e5568 space 0x560c9b471e20 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f9388 space 0x560c9b33d050 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f97e8 space 0x560c9b3160e0 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e5928 space 0x560c9b3dfef0 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4dd568 space 0x560c9b345ae0 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e0528 space 0x560c9b361390 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b509b08 space 0x560c9b4709d0 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e45c8 space 0x560c9b361050 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e4988 space 0x560c9b275940 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f8f28 space 0x560c9b122010 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4caca8 space 0x560c9b3a9e20 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b509428 space 0x560c9b3df050 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4ca7a8 space 0x560c9b376830 0x0~1000 clean)
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.2( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.3( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.4( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.5( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.6( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.7( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.8( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.9( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.10( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.11( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.12( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.13( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.14( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.15( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.16( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.17( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.18( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.19( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:00.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:00 compute-1 sudo[20412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llaknrnuamupbxjtriylbhktulovbfva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002680.3935013-307-250941793056933/AnsiballZ_setup.py'
Oct 09 09:38:00 compute-1 sudo[20412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:38:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:00 compute-1 python3.9[20414]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:38:01 compute-1 sudo[20412]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:01 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.10 deep-scrub starts
Oct 09 09:38:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[12.0( v 40'2 (0'0,40'2] local-lis/les=38/39 n=2 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=55 pruub=13.387430191s) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 40'1 mlcod 40'1 active pruub 202.937469482s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:01 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.10 deep-scrub ok
Oct 09 09:38:01 compute-1 ceph-mon[9795]: 4.14 scrub starts
Oct 09 09:38:01 compute-1 ceph-mon[9795]: 4.14 scrub ok
Oct 09 09:38:01 compute-1 ceph-mon[9795]: 7.19 scrub starts
Oct 09 09:38:01 compute-1 ceph-mon[9795]: 7.19 scrub ok
Oct 09 09:38:01 compute-1 ceph-mon[9795]: 3.0 deep-scrub starts
Oct 09 09:38:01 compute-1 ceph-mon[9795]: 3.0 deep-scrub ok
Oct 09 09:38:01 compute-1 ceph-mon[9795]: pgmap v52: 306 pgs: 2 peering, 170 unknown, 134 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[12.0( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=55 pruub=13.387430191s) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 40'1 mlcod 0'0 unknown pruub 202.937469482s@ mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.0( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 40'1058 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.5( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.4( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:01.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:01 compute-1 sudo[20497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suqjacbxrztupcvfwapqvgymjrzonmws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002680.3935013-307-250941793056933/AnsiballZ_dnf.py'
Oct 09 09:38:01 compute-1 sudo[20497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:38:01 compute-1 python3.9[20499]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:38:02 compute-1 ceph-mon[9795]: 4.13 scrub starts
Oct 09 09:38:02 compute-1 ceph-mon[9795]: 4.13 scrub ok
Oct 09 09:38:02 compute-1 ceph-mon[9795]: 3.e scrub starts
Oct 09 09:38:02 compute-1 ceph-mon[9795]: 3.e scrub ok
Oct 09 09:38:02 compute-1 ceph-mon[9795]: 7.10 deep-scrub starts
Oct 09 09:38:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:38:02 compute-1 ceph-mon[9795]: osdmap e55: 3 total, 3 up, 3 in
Oct 09 09:38:02 compute-1 ceph-mon[9795]: 7.10 deep-scrub ok
Oct 09 09:38:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Oct 09 09:38:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.14( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.19( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.18( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1a( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.d( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1f( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.e( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.b( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.c( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.9( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.a( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.6( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.8( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.f( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.3( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.2( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.7( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.4( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1b( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.5( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1e( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1d( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1c( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.13( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.12( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.17( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.16( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.15( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.14( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.19( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.18( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.10( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.d( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.b( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1f( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.9( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.0( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 40'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.6( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.8( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.11( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.2( v 40'2 (0'0,40'2] local-lis/les=55/56 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.f( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1( v 40'2 (0'0,40'2] local-lis/les=55/56 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.7( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.4( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1b( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.5( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1d( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.13( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.3( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.17( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.12( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.16( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.15( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.10( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.11( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Oct 09 09:38:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:02.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:03 compute-1 ceph-mon[9795]: 4.12 scrub starts
Oct 09 09:38:03 compute-1 ceph-mon[9795]: 4.12 scrub ok
Oct 09 09:38:03 compute-1 ceph-mon[9795]: 3.1a scrub starts
Oct 09 09:38:03 compute-1 ceph-mon[9795]: 3.1a scrub ok
Oct 09 09:38:03 compute-1 ceph-mon[9795]: 7.16 deep-scrub starts
Oct 09 09:38:03 compute-1 ceph-mon[9795]: osdmap e56: 3 total, 3 up, 3 in
Oct 09 09:38:03 compute-1 ceph-mon[9795]: 7.16 deep-scrub ok
Oct 09 09:38:03 compute-1 ceph-mon[9795]: pgmap v55: 337 pgs: 31 unknown, 32 peering, 274 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Oct 09 09:38:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 09 09:38:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 09 09:38:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:03 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:38:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:03 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:38:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:38:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:03.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:38:04 compute-1 ceph-mon[9795]: 4.11 scrub starts
Oct 09 09:38:04 compute-1 ceph-mon[9795]: 4.11 scrub ok
Oct 09 09:38:04 compute-1 ceph-mon[9795]: 3.c scrub starts
Oct 09 09:38:04 compute-1 ceph-mon[9795]: 3.c scrub ok
Oct 09 09:38:04 compute-1 ceph-mon[9795]: 7.17 scrub starts
Oct 09 09:38:04 compute-1 ceph-mon[9795]: 7.17 scrub ok
Oct 09 09:38:04 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 09 09:38:04 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 09 09:38:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 09 09:38:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 09 09:38:05 compute-1 ceph-mon[9795]: 4.f scrub starts
Oct 09 09:38:05 compute-1 ceph-mon[9795]: 4.f scrub ok
Oct 09 09:38:05 compute-1 ceph-mon[9795]: 3.15 scrub starts
Oct 09 09:38:05 compute-1 ceph-mon[9795]: 3.15 scrub ok
Oct 09 09:38:05 compute-1 ceph-mon[9795]: 7.f scrub starts
Oct 09 09:38:05 compute-1 ceph-mon[9795]: 7.f scrub ok
Oct 09 09:38:05 compute-1 ceph-mon[9795]: pgmap v56: 337 pgs: 31 unknown, 32 peering, 274 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 985 B/s wr, 2 op/s
Oct 09 09:38:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:38:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:05.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 09 09:38:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 09 09:38:06 compute-1 ceph-mon[9795]: 4.c scrub starts
Oct 09 09:38:06 compute-1 ceph-mon[9795]: 4.c scrub ok
Oct 09 09:38:06 compute-1 ceph-mon[9795]: 3.19 scrub starts
Oct 09 09:38:06 compute-1 ceph-mon[9795]: 3.19 scrub ok
Oct 09 09:38:06 compute-1 ceph-mon[9795]: 7.c scrub starts
Oct 09 09:38:06 compute-1 ceph-mon[9795]: 7.c scrub ok
Oct 09 09:38:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:06.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:07 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Oct 09 09:38:07 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Oct 09 09:38:07 compute-1 ceph-mon[9795]: 4.0 deep-scrub starts
Oct 09 09:38:07 compute-1 ceph-mon[9795]: 4.0 deep-scrub ok
Oct 09 09:38:07 compute-1 ceph-mon[9795]: 3.1f scrub starts
Oct 09 09:38:07 compute-1 ceph-mon[9795]: 3.1f scrub ok
Oct 09 09:38:07 compute-1 ceph-mon[9795]: 7.1e scrub starts
Oct 09 09:38:07 compute-1 ceph-mon[9795]: 7.1e scrub ok
Oct 09 09:38:07 compute-1 ceph-mon[9795]: pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.892608643s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491363525s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.892586708s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491363525s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.18( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965552330s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564682007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.18( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965538979s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564682007s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.892056465s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491317749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965418816s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564682007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.892033577s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491317749s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965390205s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564682007s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.891859055s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491317749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.891847610s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491317749s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954692841s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554214478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954680443s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554214478s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.891757011s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491333008s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.891747475s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491333008s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.887513161s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.487167358s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.887503624s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.487167358s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965021133s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564788818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954455376s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554244995s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965010643s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564788818s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954442978s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554244995s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.887171745s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.487075806s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.887162209s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.487075806s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.b( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964696884s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564743042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.b( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964684486s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564743042s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954156876s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554275513s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964576721s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564758301s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954094887s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554275513s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964565277s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564758301s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.9( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964447975s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564788818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.9( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964435577s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564788818s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886581421s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.487060547s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886570930s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.487060547s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.6( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964207649s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564819336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954155922s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554748535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.6( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964196205s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564819336s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954111099s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554748535s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886161804s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486968994s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886114120s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486968994s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.8( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963779449s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564849854s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.8( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963766098s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564849854s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963717461s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564834595s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963700294s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564834595s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885774612s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486968994s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885764122s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486968994s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.953328133s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554748535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.953315735s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554748535s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.3( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964341164s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565902710s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.3( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964330673s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565902710s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885399818s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486984253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885383606s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486984253s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885258675s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486892700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885251045s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486892700s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952424049s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554168701s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952407837s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554168701s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952911377s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=56'1060 lcod 56'1061 mlcod 56'1061 active pruub 205.554779053s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.2( v 40'2 (0'0,40'2] local-lis/les=55/56 n=1 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963657379s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565536499s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.2( v 40'2 (0'0,40'2] local-lis/les=55/56 n=1 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963648796s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565536499s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884939194s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486892700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884916306s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486892700s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884834290s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486892700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952721596s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 unknown NOTIFY pruub 205.554779053s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884822845s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486892700s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952664375s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554794312s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952656746s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554794312s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952675819s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554885864s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.7( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963363647s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565582275s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952667236s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554885864s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884579659s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486877441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884572029s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486877441s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.7( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963351250s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565582275s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952442169s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554809570s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952435493s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554809570s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.4( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963177681s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565612793s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.4( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963169098s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565612793s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952333450s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554840088s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952325821s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554840088s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884097099s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486679077s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884088516s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486679077s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884186745s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486816406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.883908272s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486816406s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1d( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962710381s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565780640s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1d( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962698936s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565780640s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.951692581s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554916382s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.951681137s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554916382s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962811470s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565750122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.883295059s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486816406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962224960s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565780640s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962195396s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565780640s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.883276939s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486816406s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.13( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961986542s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565795898s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.13( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961974144s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565795898s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.951109886s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.555023193s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.951101303s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.555023193s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.19( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.960655212s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564666748s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.882598877s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486679077s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.19( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.960598946s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564666748s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.882572174s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486679077s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.11( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962583542s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.566802979s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.11( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962574005s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.566802979s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.950589180s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554931641s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.950563431s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554931641s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.12( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962208748s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.566635132s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.12( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962192535s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.566635132s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.10( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962291718s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.566787720s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.10( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962280273s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.566787720s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961205482s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565750122s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.882024765s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486663818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.881998062s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486663818s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.17( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961801529s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.566635132s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.17( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961771965s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.566635132s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886302948s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491363525s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886291504s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491363525s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.949922562s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.555007935s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.949909210s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.555007935s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.881649971s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486816406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.881640434s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486816406s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.949749947s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554946899s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.949631691s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554946899s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.1b( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.17( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.1a( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.13( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.c( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.18( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.14( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.f( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.1( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.5( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.3( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.7( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.1c( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.2( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.1f( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.1b( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.a( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.13( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.d( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.14( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.c( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.f( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.9( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.10( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.15( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.16( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.16( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.d( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.1c( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.11( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.4( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.10( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.5( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.a( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.e( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.18( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:38:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:07.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:38:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.12 deep-scrub starts
Oct 09 09:38:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.12 deep-scrub ok
Oct 09 09:38:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.1c( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.10( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.12( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.12( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.12( v 56'69 lc 0'0 (0'0,56'69] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=56'69 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.1f( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.15( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.18( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.1b( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.14( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.1a( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.18( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.14( v 56'99 lc 40'86 (0'0,56'99] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=56'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.1c( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.15( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.1b( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.13( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1b( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.18( v 50'68 lc 43'19 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1a( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.19( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.13( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1c( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.c( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.d( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.a( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.14( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.5( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.a( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.17( v 56'69 lc 0'0 (0'0,56'69] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=56'69 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.6( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=57/58 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.c( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.a( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.d( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.f( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.1( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.f( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.5( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.d( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.4( v 50'68 (0'0,50'68] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.7( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.9( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.4( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.f( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.3( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.e( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.2( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.7( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.8( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.16( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.1b( v 50'68 lc 41'8 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1d( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.10( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.f( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.10( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.16( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.e( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1e( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.10( v 54'71 lc 54'70 (0'0,54'71] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=54'71 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.5( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.11( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.11( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-1 ceph-mon[9795]: 4.18 scrub starts
Oct 09 09:38:08 compute-1 ceph-mon[9795]: 4.18 scrub ok
Oct 09 09:38:08 compute-1 ceph-mon[9795]: 3.4 scrub starts
Oct 09 09:38:08 compute-1 ceph-mon[9795]: 3.4 scrub ok
Oct 09 09:38:08 compute-1 ceph-mon[9795]: 7.a deep-scrub starts
Oct 09 09:38:08 compute-1 ceph-mon[9795]: 7.a deep-scrub ok
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-1 ceph-mon[9795]: osdmap e57: 3 total, 3 up, 3 in
Oct 09 09:38:08 compute-1 sudo[20552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:38:08 compute-1 sudo[20552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:08 compute-1 sudo[20552]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:08.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.945956230s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.555023193s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.945867538s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.555023193s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.945472717s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554992676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.945446014s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554992676s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.944999695s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554885864s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.944981575s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554885864s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.943867683s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554534912s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.943853378s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554534912s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.944041252s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554870605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.944030762s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554870605s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.943557739s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554870605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.943533897s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554870605s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.942595482s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554321289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.942577362s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554321289s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.940258980s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.552383423s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.940237045s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.552383423s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[6.a( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[6.2( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:09 compute-1 ceph-mon[9795]: 11.11 scrub starts
Oct 09 09:38:09 compute-1 ceph-mon[9795]: 11.11 scrub ok
Oct 09 09:38:09 compute-1 ceph-mon[9795]: 3.9 deep-scrub starts
Oct 09 09:38:09 compute-1 ceph-mon[9795]: 3.9 deep-scrub ok
Oct 09 09:38:09 compute-1 ceph-mon[9795]: 10.12 deep-scrub starts
Oct 09 09:38:09 compute-1 ceph-mon[9795]: 10.12 deep-scrub ok
Oct 09 09:38:09 compute-1 ceph-mon[9795]: osdmap e58: 3 total, 3 up, 3 in
Oct 09 09:38:09 compute-1 ceph-mon[9795]: pgmap v60: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 836 B/s wr, 2 op/s
Oct 09 09:38:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 09 09:38:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 09 09:38:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 09 09:38:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 09 09:38:09 compute-1 ceph-mon[9795]: osdmap e59: 3 total, 3 up, 3 in
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=56'1062 lcod 56'1061 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:38:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:09.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:10 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:10 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734001e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.008573532s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622787476s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.008420944s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622787476s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007658005s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622253418s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007628441s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622253418s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007069588s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622329712s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007202148s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622451782s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007037163s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622329712s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007274628s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=56'1062 lcod 59'1067 mlcod 59'1067 active pruub 213.622756958s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007144928s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=56'1062 lcod 59'1067 mlcod 0'0 unknown NOTIFY pruub 213.622756958s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006485939s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622177124s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006444931s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622177124s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006690979s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622467041s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006664276s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622467041s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006199837s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622085571s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006181717s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622085571s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006824493s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622589111s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[6.a( v 41'42 (0'0,41'42] local-lis/les=59/60 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[6.2( v 41'42 (0'0,41'42] local-lis/les=59/60 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[6.6( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=59/60 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=59/60 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007172585s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622451782s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.005962372s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622589111s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:10 compute-1 ceph-mon[9795]: 4.1e scrub starts
Oct 09 09:38:10 compute-1 ceph-mon[9795]: 4.1e scrub ok
Oct 09 09:38:10 compute-1 ceph-mon[9795]: osdmap e60: 3 total, 3 up, 3 in
Oct 09 09:38:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:10.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:10 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:10 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.168472290s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.623825073s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.168426514s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.623825073s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.167302132s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.623062134s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.167387009s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.623184204s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.167368889s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.623184204s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.166993141s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.623062134s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.166310310s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622604370s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.166106224s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622558594s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.166025162s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622604370s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.165900230s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622558594s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.165554047s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622817993s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.165366173s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622817993s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.165130615s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622711182s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.164980888s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622711182s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[6.3( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[6.7( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-1 ceph-mon[9795]: 11.15 scrub starts
Oct 09 09:38:11 compute-1 ceph-mon[9795]: 11.15 scrub ok
Oct 09 09:38:11 compute-1 ceph-mon[9795]: pgmap v63: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 09 09:38:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 09 09:38:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 09 09:38:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 09 09:38:11 compute-1 ceph-mon[9795]: osdmap e61: 3 total, 3 up, 3 in
Oct 09 09:38:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:11.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 09 09:38:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 09 09:38:11 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093811 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:38:11 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:11 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.150447845s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609497070s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.150403976s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609497070s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.150182724s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609558105s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.150151253s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609558105s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149627686s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609512329s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149598122s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609512329s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149128914s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609497070s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149096489s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609497070s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149672508s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.610260010s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149634361s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.610260010s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.148990631s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609817505s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.148949623s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609817505s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.148059845s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609481812s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.148029327s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609481812s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=4 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.147952080s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609542847s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=4 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.147924423s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609542847s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[6.3( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[6.7( v 41'42 lc 35'11 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:12 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:12 compute-1 ceph-mon[9795]: 11.18 scrub starts
Oct 09 09:38:12 compute-1 ceph-mon[9795]: 11.18 scrub ok
Oct 09 09:38:12 compute-1 ceph-mon[9795]: 12.9 scrub starts
Oct 09 09:38:12 compute-1 ceph-mon[9795]: 12.9 scrub ok
Oct 09 09:38:12 compute-1 ceph-mon[9795]: 10.10 scrub starts
Oct 09 09:38:12 compute-1 ceph-mon[9795]: 10.10 scrub ok
Oct 09 09:38:12 compute-1 ceph-mon[9795]: osdmap e62: 3 total, 3 up, 3 in
Oct 09 09:38:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:12 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:38:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:12 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:38:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:12.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:12 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct 09 09:38:13 compute-1 ceph-mon[9795]: 11.e scrub starts
Oct 09 09:38:13 compute-1 ceph-mon[9795]: 11.e scrub ok
Oct 09 09:38:13 compute-1 ceph-mon[9795]: pgmap v66: 337 pgs: 5 active+recovery_wait+remapped, 1 active+recovering+remapped, 8 remapped+peering, 1 active+remapped, 9 peering, 313 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 813 B/s, 2 keys/s, 24 objects/s recovering
Oct 09 09:38:13 compute-1 ceph-mon[9795]: osdmap e63: 3 total, 3 up, 3 in
Oct 09 09:38:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:13.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct 09 09:38:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct 09 09:38:13 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:13 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c002fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 09 09:38:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 09 09:38:14 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:14 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c002fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:14 compute-1 ceph-mon[9795]: 8.a deep-scrub starts
Oct 09 09:38:14 compute-1 ceph-mon[9795]: 8.a deep-scrub ok
Oct 09 09:38:14 compute-1 ceph-mon[9795]: 6.f scrub starts
Oct 09 09:38:14 compute-1 ceph-mon[9795]: 6.f scrub ok
Oct 09 09:38:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:14.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:14 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:14 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 09 09:38:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 09 09:38:15 compute-1 ceph-mon[9795]: 10.e scrub starts
Oct 09 09:38:15 compute-1 ceph-mon[9795]: 10.e scrub ok
Oct 09 09:38:15 compute-1 ceph-mon[9795]: 6.7 scrub starts
Oct 09 09:38:15 compute-1 ceph-mon[9795]: 6.7 scrub ok
Oct 09 09:38:15 compute-1 ceph-mon[9795]: 5.e scrub starts
Oct 09 09:38:15 compute-1 ceph-mon[9795]: 5.e scrub ok
Oct 09 09:38:15 compute-1 ceph-mon[9795]: pgmap v68: 337 pgs: 5 active+recovery_wait+remapped, 1 active+recovering+remapped, 8 remapped+peering, 1 active+remapped, 9 peering, 313 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 798 B/s, 2 keys/s, 24 objects/s recovering
Oct 09 09:38:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:15 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:38:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:38:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:15.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:38:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:15 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734002910 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Oct 09 09:38:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Oct 09 09:38:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:16 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c003ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:16 compute-1 ceph-mon[9795]: 10.1e scrub starts
Oct 09 09:38:16 compute-1 ceph-mon[9795]: 10.1e scrub ok
Oct 09 09:38:16 compute-1 ceph-mon[9795]: 7.12 scrub starts
Oct 09 09:38:16 compute-1 ceph-mon[9795]: 7.12 scrub ok
Oct 09 09:38:16 compute-1 ceph-mon[9795]: 3.8 scrub starts
Oct 09 09:38:16 compute-1 ceph-mon[9795]: 3.8 scrub ok
Oct 09 09:38:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:16.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:16 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1f deep-scrub starts
Oct 09 09:38:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1f deep-scrub ok
Oct 09 09:38:17 compute-1 ceph-mon[9795]: 10.16 scrub starts
Oct 09 09:38:17 compute-1 ceph-mon[9795]: 10.16 scrub ok
Oct 09 09:38:17 compute-1 ceph-mon[9795]: 12.14 scrub starts
Oct 09 09:38:17 compute-1 ceph-mon[9795]: 12.14 scrub ok
Oct 09 09:38:17 compute-1 ceph-mon[9795]: 5.b deep-scrub starts
Oct 09 09:38:17 compute-1 ceph-mon[9795]: 5.b deep-scrub ok
Oct 09 09:38:17 compute-1 ceph-mon[9795]: pgmap v69: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 580 B/s, 3 keys/s, 24 objects/s recovering
Oct 09 09:38:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 09 09:38:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 09 09:38:17 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct 09 09:38:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:17.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:17 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:17 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.d scrub starts
Oct 09 09:38:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.d scrub ok
Oct 09 09:38:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:18 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.993220329s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 221.555191040s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.993190765s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.555191040s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.993136406s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=56'1060 lcod 56'1061 mlcod 56'1061 active pruub 221.555191040s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.993096352s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 unknown NOTIFY pruub 221.555191040s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.992700577s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 221.555023193s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.992686272s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.555023193s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.991698265s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 221.554428101s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.991680145s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.554428101s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:18 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:18 compute-1 ceph-mon[9795]: 10.2 scrub starts
Oct 09 09:38:18 compute-1 ceph-mon[9795]: 10.2 scrub ok
Oct 09 09:38:18 compute-1 ceph-mon[9795]: 12.1f deep-scrub starts
Oct 09 09:38:18 compute-1 ceph-mon[9795]: 12.1f deep-scrub ok
Oct 09 09:38:18 compute-1 ceph-mon[9795]: 5.d scrub starts
Oct 09 09:38:18 compute-1 ceph-mon[9795]: 5.d scrub ok
Oct 09 09:38:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 09 09:38:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 09 09:38:18 compute-1 ceph-mon[9795]: osdmap e64: 3 total, 3 up, 3 in
Oct 09 09:38:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093818 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:38:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:18.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:18 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.0 deep-scrub starts
Oct 09 09:38:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.0 deep-scrub ok
Oct 09 09:38:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct 09 09:38:19 compute-1 ceph-mon[9795]: 9.1a scrub starts
Oct 09 09:38:19 compute-1 ceph-mon[9795]: 9.1a scrub ok
Oct 09 09:38:19 compute-1 ceph-mon[9795]: 12.d scrub starts
Oct 09 09:38:19 compute-1 ceph-mon[9795]: 12.d scrub ok
Oct 09 09:38:19 compute-1 ceph-mon[9795]: 5.4 scrub starts
Oct 09 09:38:19 compute-1 ceph-mon[9795]: 5.4 scrub ok
Oct 09 09:38:19 compute-1 ceph-mon[9795]: osdmap e65: 3 total, 3 up, 3 in
Oct 09 09:38:19 compute-1 ceph-mon[9795]: pgmap v72: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 1 keys/s, 8 objects/s recovering
Oct 09 09:38:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 09 09:38:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 09 09:38:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 09 09:38:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 09 09:38:19 compute-1 ceph-mon[9795]: osdmap e66: 3 total, 3 up, 3 in
Oct 09 09:38:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:38:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:19.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:38:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:19 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 09 09:38:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[6.5( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[6.d( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=65/66 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=56'1062 lcod 56'1061 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:20 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct 09 09:38:20 compute-1 ceph-mon[9795]: 8.1a deep-scrub starts
Oct 09 09:38:20 compute-1 ceph-mon[9795]: 8.1a deep-scrub ok
Oct 09 09:38:20 compute-1 ceph-mon[9795]: 7.0 deep-scrub starts
Oct 09 09:38:20 compute-1 ceph-mon[9795]: 7.0 deep-scrub ok
Oct 09 09:38:20 compute-1 ceph-mon[9795]: 3.1d scrub starts
Oct 09 09:38:20 compute-1 ceph-mon[9795]: 3.1d scrub ok
Oct 09 09:38:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:38:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 09 09:38:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=65/66 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67 pruub=15.846351624s) [2] async=[2] r=-1 lpr=67 pi=[53,67)/1 crt=56'1062 lcod 66'1067 mlcod 66'1067 active pruub 224.549911499s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67 pruub=15.845858574s) [2] async=[2] r=-1 lpr=67 pi=[53,67)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 224.549484253s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=59/60 n=1 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=13.912847519s) [1] r=-1 lpr=67 pi=[59,67)/1 crt=41'42 mlcod 41'42 active pruub 222.616333008s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67 pruub=15.845821381s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.549484253s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=59/60 n=1 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=13.912611961s) [1] r=-1 lpr=67 pi=[59,67)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 222.616333008s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=65/66 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67 pruub=15.846072197s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=56'1062 lcod 66'1067 mlcod 0'0 unknown NOTIFY pruub 224.549911499s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.6( v 41'42 (0'0,41'42] local-lis/les=59/60 n=2 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=13.912308693s) [1] r=-1 lpr=67 pi=[59,67)/1 crt=41'42 mlcod 41'42 active pruub 222.616333008s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovering+remapped mbc={255={(0+1)=2}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.6( v 41'42 (0'0,41'42] local-lis/les=59/60 n=2 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=13.912289619s) [1] r=-1 lpr=67 pi=[59,67)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 222.616333008s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.5( v 41'42 lc 35'6 (0'0,41'42] local-lis/les=66/67 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.d( v 41'42 lc 35'7 (0'0,41'42] local-lis/les=66/67 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:20.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:20 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0047e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093820 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:38:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68 pruub=15.092641830s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 224.550186157s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68 pruub=15.092575073s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.550186157s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68 pruub=15.091803551s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 224.550109863s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68 pruub=15.091763496s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.550109863s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 09 09:38:21 compute-1 ceph-mon[9795]: 9.1b deep-scrub starts
Oct 09 09:38:21 compute-1 ceph-mon[9795]: 9.1b deep-scrub ok
Oct 09 09:38:21 compute-1 ceph-mon[9795]: 7.7 scrub starts
Oct 09 09:38:21 compute-1 ceph-mon[9795]: 7.7 scrub ok
Oct 09 09:38:21 compute-1 ceph-mon[9795]: 5.1a scrub starts
Oct 09 09:38:21 compute-1 ceph-mon[9795]: 5.1a scrub ok
Oct 09 09:38:21 compute-1 ceph-mon[9795]: pgmap v74: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 1 keys/s, 8 objects/s recovering
Oct 09 09:38:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 09 09:38:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 09 09:38:21 compute-1 ceph-mon[9795]: osdmap e67: 3 total, 3 up, 3 in
Oct 09 09:38:21 compute-1 ceph-mon[9795]: osdmap e68: 3 total, 3 up, 3 in
Oct 09 09:38:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 09 09:38:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:38:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:38:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:21 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0047e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:22 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct 09 09:38:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Oct 09 09:38:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Oct 09 09:38:22 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:22 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:22 compute-1 ceph-mon[9795]: 9.19 scrub starts
Oct 09 09:38:22 compute-1 ceph-mon[9795]: 9.19 scrub ok
Oct 09 09:38:22 compute-1 ceph-mon[9795]: 7.d scrub starts
Oct 09 09:38:22 compute-1 ceph-mon[9795]: 7.d scrub ok
Oct 09 09:38:22 compute-1 ceph-mon[9795]: osdmap e69: 3 total, 3 up, 3 in
Oct 09 09:38:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:22.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:22 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:22 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:23 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct 09 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:23 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 09 09:38:23 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 09 09:38:23 compute-1 ceph-mon[9795]: 9.1e scrub starts
Oct 09 09:38:23 compute-1 ceph-mon[9795]: 9.1e scrub ok
Oct 09 09:38:23 compute-1 ceph-mon[9795]: 12.0 scrub starts
Oct 09 09:38:23 compute-1 ceph-mon[9795]: 12.0 scrub ok
Oct 09 09:38:23 compute-1 ceph-mon[9795]: 10.14 deep-scrub starts
Oct 09 09:38:23 compute-1 ceph-mon[9795]: 10.14 deep-scrub ok
Oct 09 09:38:23 compute-1 ceph-mon[9795]: pgmap v78: 337 pgs: 2 active+recovery_wait+remapped, 4 unknown, 4 remapped+peering, 4 peering, 1 active+recovering, 322 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 7/204 objects misplaced (3.431%); 111 B/s, 2 objects/s recovering
Oct 09 09:38:23 compute-1 ceph-mon[9795]: mgrmap e32: compute-0.lwqgfy(active, since 92s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:38:23 compute-1 ceph-mon[9795]: osdmap e70: 3 total, 3 up, 3 in
Oct 09 09:38:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:23.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:23 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:23 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0047e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct 09 09:38:24 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:24 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:24 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:24 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:24 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.f scrub starts
Oct 09 09:38:24 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.f scrub ok
Oct 09 09:38:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:24 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740001c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:24 compute-1 ceph-mon[9795]: 8.1e scrub starts
Oct 09 09:38:24 compute-1 ceph-mon[9795]: 8.1e scrub ok
Oct 09 09:38:24 compute-1 ceph-mon[9795]: 7.1 scrub starts
Oct 09 09:38:24 compute-1 ceph-mon[9795]: 7.1 scrub ok
Oct 09 09:38:24 compute-1 ceph-mon[9795]: 10.1c scrub starts
Oct 09 09:38:24 compute-1 ceph-mon[9795]: 10.1c scrub ok
Oct 09 09:38:24 compute-1 ceph-mon[9795]: osdmap e71: 3 total, 3 up, 3 in
Oct 09 09:38:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:24.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:24 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:25 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Oct 09 09:38:25 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Oct 09 09:38:25 compute-1 ceph-mon[9795]: 9.1f scrub starts
Oct 09 09:38:25 compute-1 ceph-mon[9795]: 9.1f scrub ok
Oct 09 09:38:25 compute-1 ceph-mon[9795]: 12.f scrub starts
Oct 09 09:38:25 compute-1 ceph-mon[9795]: 12.f scrub ok
Oct 09 09:38:25 compute-1 ceph-mon[9795]: pgmap v81: 337 pgs: 2 active+recovery_wait+remapped, 4 unknown, 4 remapped+peering, 4 peering, 1 active+recovering, 322 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 7/204 objects misplaced (3.431%); 112 B/s, 2 objects/s recovering
Oct 09 09:38:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:25.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:25 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:26 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Oct 09 09:38:26 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Oct 09 09:38:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:26 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0058e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:26 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct 09 09:38:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 72 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=72) [0] r=0 lpr=72 pi=[61,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 72 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72) [0] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 72 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72) [0] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 72 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72) [0] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:26 compute-1 ceph-mon[9795]: 8.1d deep-scrub starts
Oct 09 09:38:26 compute-1 ceph-mon[9795]: 8.1d deep-scrub ok
Oct 09 09:38:26 compute-1 ceph-mon[9795]: 12.1 scrub starts
Oct 09 09:38:26 compute-1 ceph-mon[9795]: 12.1 scrub ok
Oct 09 09:38:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 09 09:38:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 09 09:38:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:26.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:26 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740001c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:27 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Oct 09 09:38:27 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Oct 09 09:38:27 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct 09 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[61,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[61,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:27 compute-1 ceph-mon[9795]: 9.1c scrub starts
Oct 09 09:38:27 compute-1 ceph-mon[9795]: 9.1c scrub ok
Oct 09 09:38:27 compute-1 ceph-mon[9795]: 12.5 scrub starts
Oct 09 09:38:27 compute-1 ceph-mon[9795]: 12.5 scrub ok
Oct 09 09:38:27 compute-1 ceph-mon[9795]: 3.11 scrub starts
Oct 09 09:38:27 compute-1 ceph-mon[9795]: 3.11 scrub ok
Oct 09 09:38:27 compute-1 ceph-mon[9795]: pgmap v82: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 9 objects/s recovering
Oct 09 09:38:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 09 09:38:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 09 09:38:27 compute-1 ceph-mon[9795]: osdmap e72: 3 total, 3 up, 3 in
Oct 09 09:38:27 compute-1 ceph-mon[9795]: osdmap e73: 3 total, 3 up, 3 in
Oct 09 09:38:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:38:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:27.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:38:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:27 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:28 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 09 09:38:28 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 09 09:38:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:28 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:28 compute-1 sudo[20625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:38:28 compute-1 sudo[20625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:28 compute-1 sudo[20625]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:28 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct 09 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[6.8( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=0 lpr=74 pi=[49,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=12.809546471s) [1] r=-1 lpr=74 pi=[53,74)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 229.555313110s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=12.809524536s) [1] r=-1 lpr=74 pi=[53,74)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.555313110s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=12.808170319s) [1] r=-1 lpr=74 pi=[53,74)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 229.554595947s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=12.808053970s) [1] r=-1 lpr=74 pi=[53,74)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.554595947s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:28 compute-1 ceph-mon[9795]: 11.0 scrub starts
Oct 09 09:38:28 compute-1 ceph-mon[9795]: 11.0 scrub ok
Oct 09 09:38:28 compute-1 ceph-mon[9795]: 12.1b scrub starts
Oct 09 09:38:28 compute-1 ceph-mon[9795]: 5.0 deep-scrub starts
Oct 09 09:38:28 compute-1 ceph-mon[9795]: 12.1b scrub ok
Oct 09 09:38:28 compute-1 ceph-mon[9795]: 5.0 deep-scrub ok
Oct 09 09:38:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 09 09:38:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 09 09:38:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 09 09:38:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 09 09:38:28 compute-1 ceph-mon[9795]: osdmap e74: 3 total, 3 up, 3 in
Oct 09 09:38:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:28.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:28 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0058e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:28 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:38:29 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Oct 09 09:38:29 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Oct 09 09:38:29 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=74/75 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=0 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=0 lpr=75 pi=[61,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=0 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:29 compute-1 ceph-mon[9795]: 9.2 scrub starts
Oct 09 09:38:29 compute-1 ceph-mon[9795]: 9.2 scrub ok
Oct 09 09:38:29 compute-1 ceph-mon[9795]: 7.15 scrub starts
Oct 09 09:38:29 compute-1 ceph-mon[9795]: 7.15 scrub ok
Oct 09 09:38:29 compute-1 ceph-mon[9795]: 5.8 scrub starts
Oct 09 09:38:29 compute-1 ceph-mon[9795]: 5.8 scrub ok
Oct 09 09:38:29 compute-1 ceph-mon[9795]: pgmap v85: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 9 objects/s recovering
Oct 09 09:38:29 compute-1 ceph-mon[9795]: osdmap e75: 3 total, 3 up, 3 in
Oct 09 09:38:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:38:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:29.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:38:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:29 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740008dc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:30 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Oct 09 09:38:30 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Oct 09 09:38:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:30 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct 09 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=0 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:30 compute-1 ceph-mon[9795]: 8.0 scrub starts
Oct 09 09:38:30 compute-1 ceph-mon[9795]: 8.0 scrub ok
Oct 09 09:38:30 compute-1 ceph-mon[9795]: 12.16 scrub starts
Oct 09 09:38:30 compute-1 ceph-mon[9795]: 12.16 scrub ok
Oct 09 09:38:30 compute-1 ceph-mon[9795]: 5.12 scrub starts
Oct 09 09:38:30 compute-1 ceph-mon[9795]: 5.12 scrub ok
Oct 09 09:38:30 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 09 09:38:30 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 09 09:38:30 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 09 09:38:30 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 09 09:38:30 compute-1 ceph-mon[9795]: osdmap e76: 3 total, 3 up, 3 in
Oct 09 09:38:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:30.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:30 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=0 lpr=76 pi=[61,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=0 lpr=76 pi=[61,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:31 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct 09 09:38:31 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Oct 09 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.288201332s) [1] async=[1] r=-1 lpr=77 pi=[53,77)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 234.753356934s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.288148880s) [1] r=-1 lpr=77 pi=[53,77)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.753356934s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.287746429s) [1] async=[1] r=-1 lpr=77 pi=[53,77)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 234.753341675s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.287559509s) [1] r=-1 lpr=77 pi=[53,77)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.753341675s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=-1 lpr=77 pi=[61,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=-1 lpr=77 pi=[61,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=-1 lpr=77 pi=[61,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=-1 lpr=77 pi=[61,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:31 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Oct 09 09:38:31 compute-1 ceph-mon[9795]: 9.1 scrub starts
Oct 09 09:38:31 compute-1 ceph-mon[9795]: 9.1 scrub ok
Oct 09 09:38:31 compute-1 ceph-mon[9795]: 12.15 scrub starts
Oct 09 09:38:31 compute-1 ceph-mon[9795]: 12.15 scrub ok
Oct 09 09:38:31 compute-1 ceph-mon[9795]: 5.13 scrub starts
Oct 09 09:38:31 compute-1 ceph-mon[9795]: 5.13 scrub ok
Oct 09 09:38:31 compute-1 ceph-mon[9795]: pgmap v88: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:31 compute-1 ceph-mon[9795]: osdmap e77: 3 total, 3 up, 3 in
Oct 09 09:38:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:31.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:31 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0058e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:31 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:38:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:31 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:38:32 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct 09 09:38:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:32 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740008dc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:32 compute-1 ceph-mon[9795]: 11.d deep-scrub starts
Oct 09 09:38:32 compute-1 ceph-mon[9795]: 11.d deep-scrub ok
Oct 09 09:38:32 compute-1 ceph-mon[9795]: 5.1c deep-scrub starts
Oct 09 09:38:32 compute-1 ceph-mon[9795]: 5.1c deep-scrub ok
Oct 09 09:38:32 compute-1 ceph-mon[9795]: 7.1d scrub starts
Oct 09 09:38:32 compute-1 ceph-mon[9795]: 7.1d scrub ok
Oct 09 09:38:32 compute-1 ceph-mon[9795]: osdmap e78: 3 total, 3 up, 3 in
Oct 09 09:38:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 09 09:38:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 09 09:38:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:32.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:32 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:33 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct 09 09:38:33 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:33 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:33 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:33 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:33 compute-1 ceph-mon[9795]: 8.e scrub starts
Oct 09 09:38:33 compute-1 ceph-mon[9795]: 8.e scrub ok
Oct 09 09:38:33 compute-1 ceph-mon[9795]: pgmap v92: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.0 KiB/s wr, 2 op/s; 195 B/s, 7 objects/s recovering
Oct 09 09:38:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 09 09:38:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 09 09:38:33 compute-1 ceph-mon[9795]: osdmap e79: 3 total, 3 up, 3 in
Oct 09 09:38:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:33.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:33 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:33 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Oct 09 09:38:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Oct 09 09:38:34 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct 09 09:38:34 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:34 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:34 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:34 compute-1 ceph-mon[9795]: 9.14 scrub starts
Oct 09 09:38:34 compute-1 ceph-mon[9795]: 9.14 scrub ok
Oct 09 09:38:34 compute-1 ceph-mon[9795]: 10.11 scrub starts
Oct 09 09:38:34 compute-1 ceph-mon[9795]: 10.11 scrub ok
Oct 09 09:38:34 compute-1 ceph-mon[9795]: osdmap e80: 3 total, 3 up, 3 in
Oct 09 09:38:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 09 09:38:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 09 09:38:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:34 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740008dc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:34 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:38:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 09 09:38:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 09 09:38:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct 09 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.985170364s) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 41'42 active pruub 232.463363647s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.985138893s) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 232.463363647s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:35 compute-1 ceph-mon[9795]: 11.b scrub starts
Oct 09 09:38:35 compute-1 ceph-mon[9795]: 11.b scrub ok
Oct 09 09:38:35 compute-1 ceph-mon[9795]: 10.17 deep-scrub starts
Oct 09 09:38:35 compute-1 ceph-mon[9795]: 10.17 deep-scrub ok
Oct 09 09:38:35 compute-1 ceph-mon[9795]: pgmap v95: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.0 KiB/s wr, 2 op/s; 195 B/s, 7 objects/s recovering
Oct 09 09:38:35 compute-1 ceph-mon[9795]: 10.13 scrub starts
Oct 09 09:38:35 compute-1 ceph-mon[9795]: 10.13 scrub ok
Oct 09 09:38:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:38:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 09 09:38:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 09 09:38:35 compute-1 ceph-mon[9795]: osdmap e81: 3 total, 3 up, 3 in
Oct 09 09:38:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:35.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:35 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:35 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 09 09:38:35 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 09 09:38:36 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct 09 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:36 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:36 compute-1 ceph-mon[9795]: 4.4 scrub starts
Oct 09 09:38:36 compute-1 ceph-mon[9795]: 4.4 scrub ok
Oct 09 09:38:36 compute-1 ceph-mon[9795]: 6.d scrub starts
Oct 09 09:38:36 compute-1 ceph-mon[9795]: 6.d scrub ok
Oct 09 09:38:36 compute-1 ceph-mon[9795]: 10.3 scrub starts
Oct 09 09:38:36 compute-1 ceph-mon[9795]: 10.3 scrub ok
Oct 09 09:38:36 compute-1 ceph-mon[9795]: osdmap e82: 3 total, 3 up, 3 in
Oct 09 09:38:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:36.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:36 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:36 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 09 09:38:36 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 09 09:38:37 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct 09 09:38:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:38:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:37.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:38:37 compute-1 ceph-mon[9795]: 4.7 scrub starts
Oct 09 09:38:37 compute-1 ceph-mon[9795]: 4.7 scrub ok
Oct 09 09:38:37 compute-1 ceph-mon[9795]: 10.7 scrub starts
Oct 09 09:38:37 compute-1 ceph-mon[9795]: 10.7 scrub ok
Oct 09 09:38:37 compute-1 ceph-mon[9795]: 6.1 scrub starts
Oct 09 09:38:37 compute-1 ceph-mon[9795]: 6.1 scrub ok
Oct 09 09:38:37 compute-1 ceph-mon[9795]: pgmap v98: 337 pgs: 4 unknown, 1 peering, 332 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:37 compute-1 ceph-mon[9795]: osdmap e83: 3 total, 3 up, 3 in
Oct 09 09:38:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:37 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74000a250 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:38 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 09 09:38:38 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 09 09:38:38 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct 09 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:38 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:38 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:38 compute-1 ceph-mon[9795]: 11.10 scrub starts
Oct 09 09:38:38 compute-1 ceph-mon[9795]: 11.10 scrub ok
Oct 09 09:38:38 compute-1 ceph-mon[9795]: 6.8 scrub starts
Oct 09 09:38:38 compute-1 ceph-mon[9795]: 6.8 scrub ok
Oct 09 09:38:38 compute-1 ceph-mon[9795]: 11.17 scrub starts
Oct 09 09:38:38 compute-1 ceph-mon[9795]: 11.17 scrub ok
Oct 09 09:38:38 compute-1 ceph-mon[9795]: osdmap e84: 3 total, 3 up, 3 in
Oct 09 09:38:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:38.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:38 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:38 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:39 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 09 09:38:39 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 09 09:38:39 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct 09 09:38:39 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:39 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:39 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:39 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:39.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:39 compute-1 ceph-mon[9795]: 8.13 deep-scrub starts
Oct 09 09:38:39 compute-1 ceph-mon[9795]: 8.13 deep-scrub ok
Oct 09 09:38:39 compute-1 ceph-mon[9795]: 9.10 scrub starts
Oct 09 09:38:39 compute-1 ceph-mon[9795]: 9.10 scrub ok
Oct 09 09:38:39 compute-1 ceph-mon[9795]: 12.17 scrub starts
Oct 09 09:38:39 compute-1 ceph-mon[9795]: 12.17 scrub ok
Oct 09 09:38:39 compute-1 ceph-mon[9795]: pgmap v101: 337 pgs: 4 unknown, 1 peering, 332 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:39 compute-1 ceph-mon[9795]: osdmap e85: 3 total, 3 up, 3 in
Oct 09 09:38:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:39 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:40 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 09 09:38:40 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 09 09:38:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:40 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74000a250 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:40 compute-1 ceph-mon[9795]: 9.0 scrub starts
Oct 09 09:38:40 compute-1 ceph-mon[9795]: 9.0 scrub ok
Oct 09 09:38:40 compute-1 ceph-mon[9795]: 11.12 scrub starts
Oct 09 09:38:40 compute-1 ceph-mon[9795]: 11.12 scrub ok
Oct 09 09:38:40 compute-1 ceph-mon[9795]: 11.16 scrub starts
Oct 09 09:38:40 compute-1 ceph-mon[9795]: 11.16 scrub ok
Oct 09 09:38:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:40.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:40 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093840 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:38:41 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 09 09:38:41 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 09 09:38:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:38:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:41.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:38:41 compute-1 ceph-mon[9795]: 8.1 scrub starts
Oct 09 09:38:41 compute-1 ceph-mon[9795]: 8.1 scrub ok
Oct 09 09:38:41 compute-1 ceph-mon[9795]: 5.1f scrub starts
Oct 09 09:38:41 compute-1 ceph-mon[9795]: 5.1f scrub ok
Oct 09 09:38:41 compute-1 ceph-mon[9795]: 4.19 scrub starts
Oct 09 09:38:41 compute-1 ceph-mon[9795]: 4.19 scrub ok
Oct 09 09:38:41 compute-1 ceph-mon[9795]: pgmap v103: 337 pgs: 4 unknown, 1 peering, 332 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:41 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:42 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 09 09:38:42 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 09 09:38:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:42 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:42 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct 09 09:38:42 compute-1 ceph-mon[9795]: 11.2 deep-scrub starts
Oct 09 09:38:42 compute-1 ceph-mon[9795]: 11.2 deep-scrub ok
Oct 09 09:38:42 compute-1 ceph-mon[9795]: 9.15 scrub starts
Oct 09 09:38:42 compute-1 ceph-mon[9795]: 9.15 scrub ok
Oct 09 09:38:42 compute-1 ceph-mon[9795]: 11.13 scrub starts
Oct 09 09:38:42 compute-1 ceph-mon[9795]: 11.13 scrub ok
Oct 09 09:38:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 09 09:38:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 09 09:38:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:38:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:42.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:38:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:42 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74c002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 09 09:38:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 09 09:38:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:43.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:43 compute-1 ceph-mon[9795]: 11.c scrub starts
Oct 09 09:38:43 compute-1 ceph-mon[9795]: 11.c scrub ok
Oct 09 09:38:43 compute-1 ceph-mon[9795]: 5.1b scrub starts
Oct 09 09:38:43 compute-1 ceph-mon[9795]: 5.1b scrub ok
Oct 09 09:38:43 compute-1 ceph-mon[9795]: 4.1c scrub starts
Oct 09 09:38:43 compute-1 ceph-mon[9795]: 4.1c scrub ok
Oct 09 09:38:43 compute-1 ceph-mon[9795]: pgmap v104: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 36 op/s; 36 B/s, 4 objects/s recovering
Oct 09 09:38:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 09 09:38:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 09 09:38:43 compute-1 ceph-mon[9795]: osdmap e86: 3 total, 3 up, 3 in
Oct 09 09:38:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:43 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 09 09:38:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 09 09:38:44 compute-1 sudo[20706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:38:44 compute-1 sudo[20706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:44 compute-1 sudo[20706]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:44 compute-1 sudo[20731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:38:44 compute-1 sudo[20731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:44 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:44 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct 09 09:38:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:44 compute-1 ceph-mon[9795]: 9.c scrub starts
Oct 09 09:38:44 compute-1 ceph-mon[9795]: 9.c scrub ok
Oct 09 09:38:44 compute-1 ceph-mon[9795]: 8.14 scrub starts
Oct 09 09:38:44 compute-1 ceph-mon[9795]: 8.14 scrub ok
Oct 09 09:38:44 compute-1 ceph-mon[9795]: 8.15 scrub starts
Oct 09 09:38:44 compute-1 ceph-mon[9795]: 8.15 scrub ok
Oct 09 09:38:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 09 09:38:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 09 09:38:44 compute-1 sudo[20731]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:44 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 09 09:38:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:45.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:45 compute-1 ceph-mon[9795]: 11.9 scrub starts
Oct 09 09:38:45 compute-1 ceph-mon[9795]: 11.9 scrub ok
Oct 09 09:38:45 compute-1 ceph-mon[9795]: 5.18 scrub starts
Oct 09 09:38:45 compute-1 ceph-mon[9795]: 5.18 scrub ok
Oct 09 09:38:45 compute-1 ceph-mon[9795]: 4.1f scrub starts
Oct 09 09:38:45 compute-1 ceph-mon[9795]: 4.1f scrub ok
Oct 09 09:38:45 compute-1 ceph-mon[9795]: pgmap v106: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 35 op/s; 35 B/s, 4 objects/s recovering
Oct 09 09:38:45 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 09 09:38:45 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 09 09:38:45 compute-1 ceph-mon[9795]: osdmap e87: 3 total, 3 up, 3 in
Oct 09 09:38:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:45 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 09 09:38:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 09 09:38:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:46 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74c003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct 09 09:38:46 compute-1 ceph-mon[9795]: 9.4 scrub starts
Oct 09 09:38:46 compute-1 ceph-mon[9795]: 9.4 scrub ok
Oct 09 09:38:46 compute-1 ceph-mon[9795]: 3.1c scrub starts
Oct 09 09:38:46 compute-1 ceph-mon[9795]: 3.1c scrub ok
Oct 09 09:38:46 compute-1 ceph-mon[9795]: 9.1d scrub starts
Oct 09 09:38:46 compute-1 ceph-mon[9795]: 9.1d scrub ok
Oct 09 09:38:46 compute-1 ceph-mon[9795]: osdmap e88: 3 total, 3 up, 3 in
Oct 09 09:38:46 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:46 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:46 compute-1 ceph-mon[9795]: osdmap e89: 3 total, 3 up, 3 in
Oct 09 09:38:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:38:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:46.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:38:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:46 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:47 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Oct 09 09:38:47 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Oct 09 09:38:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:47.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct 09 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:47 compute-1 ceph-mon[9795]: 11.6 scrub starts
Oct 09 09:38:47 compute-1 ceph-mon[9795]: 11.6 scrub ok
Oct 09 09:38:47 compute-1 ceph-mon[9795]: 5.15 scrub starts
Oct 09 09:38:47 compute-1 ceph-mon[9795]: 5.15 scrub ok
Oct 09 09:38:47 compute-1 ceph-mon[9795]: 8.1c scrub starts
Oct 09 09:38:47 compute-1 ceph-mon[9795]: 8.1c scrub ok
Oct 09 09:38:47 compute-1 ceph-mon[9795]: pgmap v109: 337 pgs: 4 remapped+peering, 333 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 33 op/s; 36 B/s, 4 objects/s recovering
Oct 09 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:38:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:47 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:48 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 09 09:38:48 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 09 09:38:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:48 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:48 compute-1 sudo[20811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:38:48 compute-1 sudo[20811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:48 compute-1 sudo[20811]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct 09 09:38:48 compute-1 ceph-mon[9795]: 4.b scrub starts
Oct 09 09:38:48 compute-1 ceph-mon[9795]: 4.b scrub ok
Oct 09 09:38:48 compute-1 ceph-mon[9795]: 11.1b deep-scrub starts
Oct 09 09:38:48 compute-1 ceph-mon[9795]: 11.1b deep-scrub ok
Oct 09 09:38:48 compute-1 ceph-mon[9795]: 12.18 deep-scrub starts
Oct 09 09:38:48 compute-1 ceph-mon[9795]: 12.18 deep-scrub ok
Oct 09 09:38:48 compute-1 ceph-mon[9795]: osdmap e90: 3 total, 3 up, 3 in
Oct 09 09:38:48 compute-1 ceph-mon[9795]: osdmap e91: 3 total, 3 up, 3 in
Oct 09 09:38:48 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:48 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:48 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:48 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:48 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74c003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:49 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 09 09:38:49 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 09 09:38:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:38:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:49.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:38:49 compute-1 ceph-mon[9795]: 8.7 scrub starts
Oct 09 09:38:49 compute-1 ceph-mon[9795]: 8.7 scrub ok
Oct 09 09:38:49 compute-1 ceph-mon[9795]: 8.19 scrub starts
Oct 09 09:38:49 compute-1 ceph-mon[9795]: 8.19 scrub ok
Oct 09 09:38:49 compute-1 ceph-mon[9795]: 7.11 scrub starts
Oct 09 09:38:49 compute-1 ceph-mon[9795]: 7.11 scrub ok
Oct 09 09:38:49 compute-1 ceph-mon[9795]: pgmap v112: 337 pgs: 4 remapped+peering, 333 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:49 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:49 compute-1 sudo[20842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:38:49 compute-1 sudo[20842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:49 compute-1 sudo[20842]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 09 09:38:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 09 09:38:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:50 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:50 compute-1 ceph-mon[9795]: 11.1f scrub starts
Oct 09 09:38:50 compute-1 ceph-mon[9795]: 3.a scrub starts
Oct 09 09:38:50 compute-1 ceph-mon[9795]: 3.a scrub ok
Oct 09 09:38:50 compute-1 ceph-mon[9795]: 11.1f scrub ok
Oct 09 09:38:50 compute-1 ceph-mon[9795]: 12.1a scrub starts
Oct 09 09:38:50 compute-1 ceph-mon[9795]: 12.1a scrub ok
Oct 09 09:38:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:38:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:50.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:50 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c002850 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:51 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Oct 09 09:38:51 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Oct 09 09:38:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:51.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:51 compute-1 ceph-mon[9795]: 4.10 scrub starts
Oct 09 09:38:51 compute-1 ceph-mon[9795]: 4.10 scrub ok
Oct 09 09:38:51 compute-1 ceph-mon[9795]: 4.d scrub starts
Oct 09 09:38:51 compute-1 ceph-mon[9795]: 4.d scrub ok
Oct 09 09:38:51 compute-1 ceph-mon[9795]: 12.11 scrub starts
Oct 09 09:38:51 compute-1 ceph-mon[9795]: 12.11 scrub ok
Oct 09 09:38:51 compute-1 ceph-mon[9795]: pgmap v114: 337 pgs: 4 remapped+peering, 333 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:51 compute-1 kernel: ganesha.nfsd[20601]: segfault at 50 ip 00007fc7ec5a132e sp 00007fc7a4ff8210 error 4 in libntirpc.so.5.8[7fc7ec586000+2c000] likely on CPU 3 (core 0, socket 3)
Oct 09 09:38:51 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 09 09:38:51 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:51 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c002850 fd 49 proxy ignored for local
Oct 09 09:38:51 compute-1 systemd[1]: Started Process Core Dump (PID 20869/UID 0).
Oct 09 09:38:52 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 09 09:38:52 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 09 09:38:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct 09 09:38:52 compute-1 ceph-mon[9795]: 12.1c scrub starts
Oct 09 09:38:52 compute-1 ceph-mon[9795]: 12.1c scrub ok
Oct 09 09:38:52 compute-1 ceph-mon[9795]: 11.1c deep-scrub starts
Oct 09 09:38:52 compute-1 ceph-mon[9795]: 11.1c deep-scrub ok
Oct 09 09:38:52 compute-1 ceph-mon[9795]: 9.5 scrub starts
Oct 09 09:38:52 compute-1 ceph-mon[9795]: 9.5 scrub ok
Oct 09 09:38:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 09 09:38:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 09 09:38:52 compute-1 systemd-coredump[20870]: Process 19556 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 54:
                                                   #0  0x00007fc7ec5a132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 09:38:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:52.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:52 compute-1 systemd[1]: systemd-coredump@2-20869-0.service: Deactivated successfully.
Oct 09 09:38:52 compute-1 podman[20878]: 2025-10-09 09:38:52.674243682 +0000 UTC m=+0.018281754 container died 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:38:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604-merged.mount: Deactivated successfully.
Oct 09 09:38:52 compute-1 podman[20878]: 2025-10-09 09:38:52.690943194 +0000 UTC m=+0.034981266 container remove 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:38:52 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct 09 09:38:52 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct 09 09:38:52 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:53 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Oct 09 09:38:53 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Oct 09 09:38:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:38:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:53.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:38:53 compute-1 ceph-mon[9795]: 12.12 scrub starts
Oct 09 09:38:53 compute-1 ceph-mon[9795]: 12.12 scrub ok
Oct 09 09:38:53 compute-1 ceph-mon[9795]: 11.5 scrub starts
Oct 09 09:38:53 compute-1 ceph-mon[9795]: 11.5 scrub ok
Oct 09 09:38:53 compute-1 ceph-mon[9795]: 4.8 scrub starts
Oct 09 09:38:53 compute-1 ceph-mon[9795]: 4.8 scrub ok
Oct 09 09:38:53 compute-1 ceph-mon[9795]: pgmap v115: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 4 objects/s recovering
Oct 09 09:38:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 09 09:38:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 09 09:38:53 compute-1 ceph-mon[9795]: osdmap e92: 3 total, 3 up, 3 in
Oct 09 09:38:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct 09 09:38:53 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:54 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 09 09:38:54 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 09 09:38:54 compute-1 ceph-mon[9795]: 7.e scrub starts
Oct 09 09:38:54 compute-1 ceph-mon[9795]: 7.e scrub ok
Oct 09 09:38:54 compute-1 ceph-mon[9795]: 4.a deep-scrub starts
Oct 09 09:38:54 compute-1 ceph-mon[9795]: 4.a deep-scrub ok
Oct 09 09:38:54 compute-1 ceph-mon[9795]: 7.14 scrub starts
Oct 09 09:38:54 compute-1 ceph-mon[9795]: 7.14 scrub ok
Oct 09 09:38:54 compute-1 ceph-mon[9795]: osdmap e93: 3 total, 3 up, 3 in
Oct 09 09:38:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 09 09:38:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 09 09:38:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct 09 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532474518s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 41'42 active pruub 256.462554932s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823891640s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 active pruub 258.754333496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823626518s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 active pruub 258.754333496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:54.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:55 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 09 09:38:55 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 09 09:38:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:55.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:55 compute-1 ceph-mon[9795]: 12.10 scrub starts
Oct 09 09:38:55 compute-1 ceph-mon[9795]: 12.10 scrub ok
Oct 09 09:38:55 compute-1 ceph-mon[9795]: 11.1a scrub starts
Oct 09 09:38:55 compute-1 ceph-mon[9795]: 11.1a scrub ok
Oct 09 09:38:55 compute-1 ceph-mon[9795]: 7.1f scrub starts
Oct 09 09:38:55 compute-1 ceph-mon[9795]: 7.1f scrub ok
Oct 09 09:38:55 compute-1 ceph-mon[9795]: pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 4 objects/s recovering
Oct 09 09:38:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 09 09:38:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 09 09:38:55 compute-1 ceph-mon[9795]: osdmap e94: 3 total, 3 up, 3 in
Oct 09 09:38:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct 09 09:38:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:56 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Oct 09 09:38:56 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Oct 09 09:38:56 compute-1 ceph-mon[9795]: 5.1d deep-scrub starts
Oct 09 09:38:56 compute-1 ceph-mon[9795]: 5.1d deep-scrub ok
Oct 09 09:38:56 compute-1 ceph-mon[9795]: 9.a scrub starts
Oct 09 09:38:56 compute-1 ceph-mon[9795]: 9.a scrub ok
Oct 09 09:38:56 compute-1 ceph-mon[9795]: 12.3 scrub starts
Oct 09 09:38:56 compute-1 ceph-mon[9795]: 12.3 scrub ok
Oct 09 09:38:56 compute-1 ceph-mon[9795]: osdmap e95: 3 total, 3 up, 3 in
Oct 09 09:38:56 compute-1 ceph-mon[9795]: 3.12 deep-scrub starts
Oct 09 09:38:56 compute-1 ceph-mon[9795]: 3.12 deep-scrub ok
Oct 09 09:38:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct 09 09:38:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:56.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:57 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 09 09:38:57 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 09 09:38:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:38:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:57.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:38:57 compute-1 ceph-mon[9795]: 9.d deep-scrub starts
Oct 09 09:38:57 compute-1 ceph-mon[9795]: 9.d deep-scrub ok
Oct 09 09:38:57 compute-1 ceph-mon[9795]: 8.b scrub starts
Oct 09 09:38:57 compute-1 ceph-mon[9795]: 8.b scrub ok
Oct 09 09:38:57 compute-1 ceph-mon[9795]: pgmap v121: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:57 compute-1 ceph-mon[9795]: osdmap e96: 3 total, 3 up, 3 in
Oct 09 09:38:57 compute-1 ceph-mon[9795]: 12.19 deep-scrub starts
Oct 09 09:38:57 compute-1 ceph-mon[9795]: 12.19 deep-scrub ok
Oct 09 09:38:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct 09 09:38:57 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991911888s) [2] async=[2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 40'1059 active pruub 260.950622559s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:57 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:57 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991250038s) [2] async=[2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 40'1059 active pruub 260.950653076s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:57 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093857 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:38:58 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 09 09:38:58 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 09 09:38:58 compute-1 ceph-mon[9795]: 5.1 scrub starts
Oct 09 09:38:58 compute-1 ceph-mon[9795]: 5.1 scrub ok
Oct 09 09:38:58 compute-1 ceph-mon[9795]: 8.11 scrub starts
Oct 09 09:38:58 compute-1 ceph-mon[9795]: 8.11 scrub ok
Oct 09 09:38:58 compute-1 ceph-mon[9795]: osdmap e97: 3 total, 3 up, 3 in
Oct 09 09:38:58 compute-1 ceph-mon[9795]: 5.19 scrub starts
Oct 09 09:38:58 compute-1 ceph-mon[9795]: 5.19 scrub ok
Oct 09 09:38:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct 09 09:38:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:38:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:58.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:38:59 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 09 09:38:59 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 09 09:38:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:38:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:38:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:38:59 compute-1 ceph-mon[9795]: 3.5 scrub starts
Oct 09 09:38:59 compute-1 ceph-mon[9795]: 3.5 scrub ok
Oct 09 09:38:59 compute-1 ceph-mon[9795]: 9.13 deep-scrub starts
Oct 09 09:38:59 compute-1 ceph-mon[9795]: 9.13 deep-scrub ok
Oct 09 09:38:59 compute-1 ceph-mon[9795]: pgmap v124: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:59 compute-1 ceph-mon[9795]: osdmap e98: 3 total, 3 up, 3 in
Oct 09 09:38:59 compute-1 ceph-mon[9795]: 5.3 scrub starts
Oct 09 09:38:59 compute-1 ceph-mon[9795]: 5.3 scrub ok
Oct 09 09:39:00 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 09 09:39:00 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 09 09:39:00 compute-1 sudo[20497]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:00 compute-1 ceph-mon[9795]: 3.d scrub starts
Oct 09 09:39:00 compute-1 ceph-mon[9795]: 3.d scrub ok
Oct 09 09:39:00 compute-1 ceph-mon[9795]: 8.5 scrub starts
Oct 09 09:39:00 compute-1 ceph-mon[9795]: 8.5 scrub ok
Oct 09 09:39:00 compute-1 ceph-mon[9795]: 5.17 scrub starts
Oct 09 09:39:00 compute-1 ceph-mon[9795]: 5.17 scrub ok
Oct 09 09:39:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:39:00 compute-1 sudo[21065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tagvsgulqznrkzytaqpktpgssjrezsiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002740.5701008-343-75942321932275/AnsiballZ_command.py'
Oct 09 09:39:00 compute-1 sudo[21065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:00 compute-1 python3.9[21067]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:39:01 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 09 09:39:01 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 09 09:39:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:01.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:01 compute-1 ceph-mon[9795]: 11.7 scrub starts
Oct 09 09:39:01 compute-1 ceph-mon[9795]: 11.7 scrub ok
Oct 09 09:39:01 compute-1 ceph-mon[9795]: 8.f scrub starts
Oct 09 09:39:01 compute-1 ceph-mon[9795]: 8.f scrub ok
Oct 09 09:39:01 compute-1 ceph-mon[9795]: pgmap v126: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:01 compute-1 ceph-mon[9795]: 12.a deep-scrub starts
Oct 09 09:39:01 compute-1 ceph-mon[9795]: 12.a deep-scrub ok
Oct 09 09:39:01 compute-1 sudo[21065]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 09 09:39:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 09 09:39:02 compute-1 sudo[21353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqwfxpzmdbpmnjtyjhnumghrgvqrxlbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002742.010529-367-31144205369617/AnsiballZ_selinux.py'
Oct 09 09:39:02 compute-1 sudo[21353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct 09 09:39:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:02.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564367294s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 261.555847168s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:02 compute-1 ceph-mon[9795]: 11.4 scrub starts
Oct 09 09:39:02 compute-1 ceph-mon[9795]: 11.4 scrub ok
Oct 09 09:39:02 compute-1 ceph-mon[9795]: 4.9 scrub starts
Oct 09 09:39:02 compute-1 ceph-mon[9795]: 4.9 scrub ok
Oct 09 09:39:02 compute-1 ceph-mon[9795]: 7.3 scrub starts
Oct 09 09:39:02 compute-1 ceph-mon[9795]: 7.3 scrub ok
Oct 09 09:39:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 09 09:39:02 compute-1 python3.9[21355]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 09 09:39:02 compute-1 sudo[21353]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:02 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 3.
Oct 09 09:39:02 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:39:02 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:39:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 09 09:39:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 09 09:39:03 compute-1 podman[21418]: 2025-10-09 09:39:03.118937512 +0000 UTC m=+0.028360970 container create 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:39:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:03 compute-1 podman[21418]: 2025-10-09 09:39:03.164860077 +0000 UTC m=+0.074283555 container init 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:39:03 compute-1 podman[21418]: 2025-10-09 09:39:03.16892631 +0000 UTC m=+0.078349768 container start 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:39:03 compute-1 bash[21418]: 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee
Oct 09 09:39:03 compute-1 podman[21418]: 2025-10-09 09:39:03.107402543 +0000 UTC m=+0.016826021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:39:03 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:39:03 compute-1 sudo[21597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujeujiopobnpsouontdyvpsfbvcpmehf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002743.1245522-400-114957405561956/AnsiballZ_command.py'
Oct 09 09:39:03 compute-1 sudo[21597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:03.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:03 compute-1 python3.9[21599]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 09 09:39:03 compute-1 sudo[21597]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:03 compute-1 ceph-mon[9795]: 5.9 scrub starts
Oct 09 09:39:03 compute-1 ceph-mon[9795]: 5.9 scrub ok
Oct 09 09:39:03 compute-1 ceph-mon[9795]: 11.19 scrub starts
Oct 09 09:39:03 compute-1 ceph-mon[9795]: 11.19 scrub ok
Oct 09 09:39:03 compute-1 ceph-mon[9795]: pgmap v127: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 21 op/s; 212 B/s, 6 objects/s recovering
Oct 09 09:39:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 09 09:39:03 compute-1 ceph-mon[9795]: osdmap e99: 3 total, 3 up, 3 in
Oct 09 09:39:03 compute-1 ceph-mon[9795]: 5.14 scrub starts
Oct 09 09:39:03 compute-1 ceph-mon[9795]: 5.14 scrub ok
Oct 09 09:39:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct 09 09:39:03 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:03 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:03 compute-1 sudo[21750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfjwupypfhnywyqqjmsqulgumruovems ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002743.6510305-424-133130738910709/AnsiballZ_file.py'
Oct 09 09:39:03 compute-1 sudo[21750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:03 compute-1 python3.9[21752]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:39:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 09 09:39:03 compute-1 sudo[21750]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 09 09:39:04 compute-1 sudo[21902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmeieqwxoevmgxvlfostbqyxsaebtbwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002744.2114005-448-75523069044448/AnsiballZ_mount.py'
Oct 09 09:39:04 compute-1 sudo[21902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:04.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:04 compute-1 ceph-mon[9795]: 11.1 scrub starts
Oct 09 09:39:04 compute-1 ceph-mon[9795]: 11.1 scrub ok
Oct 09 09:39:04 compute-1 ceph-mon[9795]: 12.1e scrub starts
Oct 09 09:39:04 compute-1 ceph-mon[9795]: 12.1e scrub ok
Oct 09 09:39:04 compute-1 ceph-mon[9795]: osdmap e100: 3 total, 3 up, 3 in
Oct 09 09:39:04 compute-1 ceph-mon[9795]: 3.7 scrub starts
Oct 09 09:39:04 compute-1 ceph-mon[9795]: 3.7 scrub ok
Oct 09 09:39:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 09 09:39:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:39:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct 09 09:39:04 compute-1 python3.9[21904]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 09 09:39:04 compute-1 sudo[21902]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 09 09:39:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 09 09:39:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:05.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:05 compute-1 sudo[22055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tniwjxaqpvihznpybkwcqfndwiqopuwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002745.4489815-532-56435184855448/AnsiballZ_file.py'
Oct 09 09:39:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:39:05 compute-1 sudo[22055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct 09 09:39:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605948448s) [2] async=[2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 269.640716553s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:05 compute-1 ceph-mon[9795]: 3.3 scrub starts
Oct 09 09:39:05 compute-1 ceph-mon[9795]: 3.3 scrub ok
Oct 09 09:39:05 compute-1 ceph-mon[9795]: pgmap v130: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 21 op/s; 212 B/s, 6 objects/s recovering
Oct 09 09:39:05 compute-1 ceph-mon[9795]: 11.8 scrub starts
Oct 09 09:39:05 compute-1 ceph-mon[9795]: 11.8 scrub ok
Oct 09 09:39:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 09 09:39:05 compute-1 ceph-mon[9795]: osdmap e101: 3 total, 3 up, 3 in
Oct 09 09:39:05 compute-1 ceph-mon[9795]: 3.b scrub starts
Oct 09 09:39:05 compute-1 ceph-mon[9795]: 3.b scrub ok
Oct 09 09:39:05 compute-1 python3.9[22057]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:39:05 compute-1 sudo[22055]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 09 09:39:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 09 09:39:06 compute-1 sudo[22207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxscokodogssrbxqcdlwmgcwsyqbyenv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002746.0026135-556-179766026036334/AnsiballZ_stat.py'
Oct 09 09:39:06 compute-1 sudo[22207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:06 compute-1 python3.9[22209]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:39:06 compute-1 sudo[22207]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:06 compute-1 sudo[22285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adtzioxifjrtlzpobdlndsivzlkvekaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002746.0026135-556-179766026036334/AnsiballZ_file.py'
Oct 09 09:39:06 compute-1 sudo[22285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct 09 09:39:06 compute-1 ceph-mon[9795]: 9.e scrub starts
Oct 09 09:39:06 compute-1 ceph-mon[9795]: 9.e scrub ok
Oct 09 09:39:06 compute-1 ceph-mon[9795]: 9.16 scrub starts
Oct 09 09:39:06 compute-1 ceph-mon[9795]: 9.16 scrub ok
Oct 09 09:39:06 compute-1 ceph-mon[9795]: osdmap e102: 3 total, 3 up, 3 in
Oct 09 09:39:06 compute-1 ceph-mon[9795]: 5.6 scrub starts
Oct 09 09:39:06 compute-1 ceph-mon[9795]: 5.6 scrub ok
Oct 09 09:39:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 09 09:39:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 09 09:39:06 compute-1 ceph-mon[9795]: osdmap e103: 3 total, 3 up, 3 in
Oct 09 09:39:06 compute-1 python3.9[22287]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:39:06 compute-1 sudo[22285]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 09 09:39:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 09 09:39:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:07.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct 09 09:39:07 compute-1 ceph-mon[9795]: 5.2 scrub starts
Oct 09 09:39:07 compute-1 ceph-mon[9795]: 5.2 scrub ok
Oct 09 09:39:07 compute-1 ceph-mon[9795]: pgmap v133: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 09 09:39:07 compute-1 ceph-mon[9795]: 12.13 scrub starts
Oct 09 09:39:07 compute-1 ceph-mon[9795]: 12.13 scrub ok
Oct 09 09:39:07 compute-1 ceph-mon[9795]: 7.4 scrub starts
Oct 09 09:39:07 compute-1 ceph-mon[9795]: 7.4 scrub ok
Oct 09 09:39:07 compute-1 ceph-mon[9795]: osdmap e104: 3 total, 3 up, 3 in
Oct 09 09:39:07 compute-1 sudo[22438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otmxdzhdudsvfjqvvemvxlsxhtgsahvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002747.5812912-628-261419884388442/AnsiballZ_getent.py'
Oct 09 09:39:07 compute-1 sudo[22438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Oct 09 09:39:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Oct 09 09:39:08 compute-1 python3.9[22440]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 09 09:39:08 compute-1 sudo[22438]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:08 compute-1 sudo[22507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:39:08 compute-1 sudo[22507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:08 compute-1 sudo[22507]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:08 compute-1 sudo[22616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqvgjjfqnpnwvnazokwwbqhrllqcpdrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002748.3633335-658-278251269732927/AnsiballZ_getent.py'
Oct 09 09:39:08 compute-1 sudo[22616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:08.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct 09 09:39:08 compute-1 ceph-mon[9795]: 11.f scrub starts
Oct 09 09:39:08 compute-1 ceph-mon[9795]: 11.f scrub ok
Oct 09 09:39:08 compute-1 ceph-mon[9795]: 4.1 deep-scrub starts
Oct 09 09:39:08 compute-1 ceph-mon[9795]: 4.1 deep-scrub ok
Oct 09 09:39:08 compute-1 ceph-mon[9795]: 7.9 scrub starts
Oct 09 09:39:08 compute-1 ceph-mon[9795]: 7.9 scrub ok
Oct 09 09:39:08 compute-1 ceph-mon[9795]: 5.7 deep-scrub starts
Oct 09 09:39:08 compute-1 ceph-mon[9795]: 5.7 deep-scrub ok
Oct 09 09:39:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 09 09:39:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 09 09:39:08 compute-1 ceph-mon[9795]: osdmap e105: 3 total, 3 up, 3 in
Oct 09 09:39:08 compute-1 python3.9[22618]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 09 09:39:08 compute-1 sudo[22616]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:09 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 09 09:39:09 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 09 09:39:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:09 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:39:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:09 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:39:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:39:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:09.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:39:09 compute-1 sudo[22770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmytqbeymormfokyixckgiczazpygxrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002748.9627216-682-18610672339468/AnsiballZ_group.py'
Oct 09 09:39:09 compute-1 sudo[22770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:09 compute-1 python3.9[22772]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 09 09:39:09 compute-1 sudo[22770]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:09 compute-1 ceph-mon[9795]: pgmap v136: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 09 09:39:09 compute-1 ceph-mon[9795]: 7.5 scrub starts
Oct 09 09:39:09 compute-1 ceph-mon[9795]: 7.5 scrub ok
Oct 09 09:39:09 compute-1 ceph-mon[9795]: 5.5 scrub starts
Oct 09 09:39:09 compute-1 ceph-mon[9795]: 5.5 scrub ok
Oct 09 09:39:09 compute-1 ceph-mon[9795]: 5.16 scrub starts
Oct 09 09:39:09 compute-1 ceph-mon[9795]: 5.16 scrub ok
Oct 09 09:39:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct 09 09:39:10 compute-1 sudo[22922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rthzfhgnpmlbazswnewsvltrapdwdely ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002749.838722-709-113852240967528/AnsiballZ_file.py'
Oct 09 09:39:10 compute-1 sudo[22922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:10 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 09 09:39:10 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 09 09:39:10 compute-1 python3.9[22924]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 09 09:39:10 compute-1 sudo[22922]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:10.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:39:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct 09 09:39:10 compute-1 ceph-mon[9795]: 4.6 deep-scrub starts
Oct 09 09:39:10 compute-1 ceph-mon[9795]: 4.6 deep-scrub ok
Oct 09 09:39:10 compute-1 ceph-mon[9795]: osdmap e106: 3 total, 3 up, 3 in
Oct 09 09:39:10 compute-1 ceph-mon[9795]: 5.1e scrub starts
Oct 09 09:39:10 compute-1 ceph-mon[9795]: 5.1e scrub ok
Oct 09 09:39:10 compute-1 ceph-mon[9795]: 8.8 scrub starts
Oct 09 09:39:10 compute-1 ceph-mon[9795]: 8.8 scrub ok
Oct 09 09:39:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 09 09:39:10 compute-1 sudo[23074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gadlalfiseozfgvtrvsxaoqltkwlvbac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002750.5851169-742-89478934477379/AnsiballZ_dnf.py'
Oct 09 09:39:10 compute-1 sudo[23074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:10 compute-1 python3.9[23076]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Oct 09 09:39:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Oct 09 09:39:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct 09 09:39:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:39:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:11.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:39:11 compute-1 ceph-mon[9795]: 11.a scrub starts
Oct 09 09:39:11 compute-1 ceph-mon[9795]: 11.a scrub ok
Oct 09 09:39:11 compute-1 ceph-mon[9795]: pgmap v139: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 09 09:39:11 compute-1 ceph-mon[9795]: osdmap e107: 3 total, 3 up, 3 in
Oct 09 09:39:11 compute-1 ceph-mon[9795]: 7.b scrub starts
Oct 09 09:39:11 compute-1 ceph-mon[9795]: 7.b scrub ok
Oct 09 09:39:11 compute-1 ceph-mon[9795]: 8.4 deep-scrub starts
Oct 09 09:39:11 compute-1 ceph-mon[9795]: 8.4 deep-scrub ok
Oct 09 09:39:11 compute-1 ceph-mon[9795]: osdmap e108: 3 total, 3 up, 3 in
Oct 09 09:39:11 compute-1 ceph-mon[9795]: 9.b scrub starts
Oct 09 09:39:11 compute-1 ceph-mon[9795]: 9.b scrub ok
Oct 09 09:39:11 compute-1 sudo[23074]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:12 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 09 09:39:12 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 09 09:39:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct 09 09:39:12 compute-1 sudo[23228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yitadddnaozglzovbfnjuklnzdohhfuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002752.1510909-766-167250824692089/AnsiballZ_file.py'
Oct 09 09:39:12 compute-1 sudo[23228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:12 compute-1 python3.9[23230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:39:12 compute-1 sudo[23228]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:12.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:12 compute-1 sudo[23380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khvqeqpgjagtgfwjbpvjchvkecfcovfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002752.6591148-790-44090402417855/AnsiballZ_stat.py'
Oct 09 09:39:12 compute-1 sudo[23380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 09 09:39:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 09 09:39:13 compute-1 python3.9[23382]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:39:13 compute-1 sudo[23380]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:13 compute-1 ceph-mon[9795]: 12.8 scrub starts
Oct 09 09:39:13 compute-1 ceph-mon[9795]: 12.8 scrub ok
Oct 09 09:39:13 compute-1 ceph-mon[9795]: 11.1d scrub starts
Oct 09 09:39:13 compute-1 ceph-mon[9795]: 11.1d scrub ok
Oct 09 09:39:13 compute-1 ceph-mon[9795]: osdmap e109: 3 total, 3 up, 3 in
Oct 09 09:39:13 compute-1 ceph-mon[9795]: 9.8 scrub starts
Oct 09 09:39:13 compute-1 ceph-mon[9795]: 9.8 scrub ok
Oct 09 09:39:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct 09 09:39:13 compute-1 sudo[23458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-resserudyrkvwfalimyxcwbuqtqngoxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002752.6591148-790-44090402417855/AnsiballZ_file.py'
Oct 09 09:39:13 compute-1 sudo[23458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:13 compute-1 python3.9[23460]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:39:13 compute-1 sudo[23458]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:13.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:13 compute-1 sudo[23611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsseltnnqwkxpqmktsgnudvyzerqhybz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002753.6194198-829-133910522519130/AnsiballZ_stat.py'
Oct 09 09:39:13 compute-1 sudo[23611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:13 compute-1 python3.9[23613]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:39:14 compute-1 sudo[23611]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Oct 09 09:39:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Oct 09 09:39:14 compute-1 ceph-mon[9795]: pgmap v143: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 2.2 KiB/s wr, 6 op/s; 30 B/s, 1 objects/s recovering
Oct 09 09:39:14 compute-1 ceph-mon[9795]: 7.2 scrub starts
Oct 09 09:39:14 compute-1 ceph-mon[9795]: 7.2 scrub ok
Oct 09 09:39:14 compute-1 ceph-mon[9795]: 3.10 scrub starts
Oct 09 09:39:14 compute-1 ceph-mon[9795]: 3.10 scrub ok
Oct 09 09:39:14 compute-1 ceph-mon[9795]: osdmap e110: 3 total, 3 up, 3 in
Oct 09 09:39:14 compute-1 ceph-mon[9795]: 9.17 scrub starts
Oct 09 09:39:14 compute-1 ceph-mon[9795]: 9.17 scrub ok
Oct 09 09:39:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct 09 09:39:14 compute-1 sudo[23689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axwghubyppdxtczbyasglsxhjifawola ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002753.6194198-829-133910522519130/AnsiballZ_file.py'
Oct 09 09:39:14 compute-1 sudo[23689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:14 compute-1 python3.9[23691]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:39:14 compute-1 sudo[23689]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:14.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:14 compute-1 sudo[23841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wadwovnicceojhmqdnadggchmhovaqmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002754.7365587-874-272238421740474/AnsiballZ_dnf.py'
Oct 09 09:39:14 compute-1 sudo[23841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 09 09:39:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 09 09:39:15 compute-1 ceph-mon[9795]: 3.6 deep-scrub starts
Oct 09 09:39:15 compute-1 ceph-mon[9795]: 3.6 deep-scrub ok
Oct 09 09:39:15 compute-1 ceph-mon[9795]: 5.f deep-scrub starts
Oct 09 09:39:15 compute-1 ceph-mon[9795]: 5.f deep-scrub ok
Oct 09 09:39:15 compute-1 ceph-mon[9795]: osdmap e111: 3 total, 3 up, 3 in
Oct 09 09:39:15 compute-1 ceph-mon[9795]: 8.9 deep-scrub starts
Oct 09 09:39:15 compute-1 ceph-mon[9795]: 8.9 deep-scrub ok
Oct 09 09:39:15 compute-1 python3.9[23843]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:39:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:15.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4c18000df0 fd 38 proxy ignored for local
Oct 09 09:39:15 compute-1 kernel: ganesha.nfsd[23846]: segfault at 50 ip 00007f4cc4fc932e sp 00007f4c94ff8210 error 4 in libntirpc.so.5.8[7f4cc4fae000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 09 09:39:15 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 09 09:39:15 compute-1 systemd[1]: Started Process Core Dump (PID 23862/UID 0).
Oct 09 09:39:16 compute-1 sudo[23841]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 09 09:39:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 09 09:39:16 compute-1 ceph-mon[9795]: pgmap v146: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:16 compute-1 ceph-mon[9795]: 12.c scrub starts
Oct 09 09:39:16 compute-1 ceph-mon[9795]: 12.c scrub ok
Oct 09 09:39:16 compute-1 ceph-mon[9795]: 5.10 scrub starts
Oct 09 09:39:16 compute-1 ceph-mon[9795]: 5.10 scrub ok
Oct 09 09:39:16 compute-1 ceph-mon[9795]: 12.4 scrub starts
Oct 09 09:39:16 compute-1 ceph-mon[9795]: 12.4 scrub ok
Oct 09 09:39:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:16.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:16 compute-1 systemd-coredump[23863]: Process 21486 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 41:
                                                   #0  0x00007f4cc4fc932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 09:39:16 compute-1 systemd[1]: systemd-coredump@3-23862-0.service: Deactivated successfully.
Oct 09 09:39:16 compute-1 podman[23983]: 2025-10-09 09:39:16.712764862 +0000 UTC m=+0.017558139 container died 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:39:16 compute-1 systemd[1]: var-lib-containers-storage-overlay-9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57-merged.mount: Deactivated successfully.
Oct 09 09:39:16 compute-1 podman[23983]: 2025-10-09 09:39:16.737792279 +0000 UTC m=+0.042585546 container remove 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:39:16 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct 09 09:39:16 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct 09 09:39:16 compute-1 python3.9[24032]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:39:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 09 09:39:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 09 09:39:17 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct 09 09:39:17 compute-1 ceph-mon[9795]: 3.2 deep-scrub starts
Oct 09 09:39:17 compute-1 ceph-mon[9795]: 3.2 deep-scrub ok
Oct 09 09:39:17 compute-1 ceph-mon[9795]: 4.e scrub starts
Oct 09 09:39:17 compute-1 ceph-mon[9795]: 4.e scrub ok
Oct 09 09:39:17 compute-1 ceph-mon[9795]: 9.7 scrub starts
Oct 09 09:39:17 compute-1 ceph-mon[9795]: 9.7 scrub ok
Oct 09 09:39:17 compute-1 ceph-mon[9795]: pgmap v147: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Oct 09 09:39:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 09 09:39:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:17.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:17 compute-1 python3.9[24205]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 09 09:39:17 compute-1 python3.9[24355]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:39:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 09 09:39:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 09 09:39:18 compute-1 ceph-mon[9795]: 3.1 scrub starts
Oct 09 09:39:18 compute-1 ceph-mon[9795]: 3.1 scrub ok
Oct 09 09:39:18 compute-1 ceph-mon[9795]: 4.5 scrub starts
Oct 09 09:39:18 compute-1 ceph-mon[9795]: 4.5 scrub ok
Oct 09 09:39:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 09 09:39:18 compute-1 ceph-mon[9795]: osdmap e112: 3 total, 3 up, 3 in
Oct 09 09:39:18 compute-1 ceph-mon[9795]: 4.15 scrub starts
Oct 09 09:39:18 compute-1 ceph-mon[9795]: 4.15 scrub ok
Oct 09 09:39:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:18.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:18 compute-1 sudo[24505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dokscfejqmfklowjyyxziwmkntzplpup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002758.4361408-997-262466484868488/AnsiballZ_systemd.py'
Oct 09 09:39:18 compute-1 sudo[24505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 09 09:39:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 09 09:39:19 compute-1 ceph-mon[9795]: 7.6 scrub starts
Oct 09 09:39:19 compute-1 ceph-mon[9795]: 7.6 scrub ok
Oct 09 09:39:19 compute-1 ceph-mon[9795]: 9.11 scrub starts
Oct 09 09:39:19 compute-1 ceph-mon[9795]: 9.11 scrub ok
Oct 09 09:39:19 compute-1 ceph-mon[9795]: 12.2 scrub starts
Oct 09 09:39:19 compute-1 ceph-mon[9795]: 12.2 scrub ok
Oct 09 09:39:19 compute-1 ceph-mon[9795]: pgmap v149: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 09 09:39:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 09 09:39:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct 09 09:39:19 compute-1 python3.9[24507]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:39:19 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 09 09:39:19 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Oct 09 09:39:19 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 09 09:39:19 compute-1 systemd[1]: tuned.service: Consumed 240ms CPU time, 19.1M memory peak, read 4.0M from disk, written 16.0K to disk.
Oct 09 09:39:19 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 09 09:39:19 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 09 09:39:19 compute-1 sudo[24505]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:39:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:19.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:39:19 compute-1 python3.9[24669]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 09 09:39:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Oct 09 09:39:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Oct 09 09:39:20 compute-1 ceph-mon[9795]: 12.b scrub starts
Oct 09 09:39:20 compute-1 ceph-mon[9795]: 12.b scrub ok
Oct 09 09:39:20 compute-1 ceph-mon[9795]: 5.11 scrub starts
Oct 09 09:39:20 compute-1 ceph-mon[9795]: 5.11 scrub ok
Oct 09 09:39:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 09 09:39:20 compute-1 ceph-mon[9795]: osdmap e113: 3 total, 3 up, 3 in
Oct 09 09:39:20 compute-1 ceph-mon[9795]: 4.2 scrub starts
Oct 09 09:39:20 compute-1 ceph-mon[9795]: 4.2 scrub ok
Oct 09 09:39:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:39:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:39:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:20.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:39:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:39:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 09 09:39:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct 09 09:39:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 09 09:39:21 compute-1 ceph-mon[9795]: 12.e deep-scrub starts
Oct 09 09:39:21 compute-1 ceph-mon[9795]: 12.e deep-scrub ok
Oct 09 09:39:21 compute-1 ceph-mon[9795]: 11.1e deep-scrub starts
Oct 09 09:39:21 compute-1 ceph-mon[9795]: 11.1e deep-scrub ok
Oct 09 09:39:21 compute-1 ceph-mon[9795]: 9.18 scrub starts
Oct 09 09:39:21 compute-1 ceph-mon[9795]: 9.18 scrub ok
Oct 09 09:39:21 compute-1 ceph-mon[9795]: pgmap v151: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 1 objects/s recovering
Oct 09 09:39:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 09 09:39:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:21.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:22 compute-1 ceph-mon[9795]: 12.6 scrub starts
Oct 09 09:39:22 compute-1 ceph-mon[9795]: 12.6 scrub ok
Oct 09 09:39:22 compute-1 ceph-mon[9795]: 9.12 scrub starts
Oct 09 09:39:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 09 09:39:22 compute-1 ceph-mon[9795]: osdmap e114: 3 total, 3 up, 3 in
Oct 09 09:39:22 compute-1 ceph-mon[9795]: 9.12 scrub ok
Oct 09 09:39:22 compute-1 ceph-mon[9795]: 12.1d scrub starts
Oct 09 09:39:22 compute-1 ceph-mon[9795]: 12.1d scrub ok
Oct 09 09:39:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Oct 09 09:39:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Oct 09 09:39:22 compute-1 sudo[24820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unpsloyyxvlibcbpdhwarwtklhvzwoaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002762.301208-1168-244376561685794/AnsiballZ_systemd.py'
Oct 09 09:39:22 compute-1 sudo[24820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:22.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:22 compute-1 python3.9[24822]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:39:22 compute-1 sudo[24820]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:23 compute-1 ceph-mon[9795]: 5.a deep-scrub starts
Oct 09 09:39:23 compute-1 ceph-mon[9795]: 5.a deep-scrub ok
Oct 09 09:39:23 compute-1 ceph-mon[9795]: 8.12 deep-scrub starts
Oct 09 09:39:23 compute-1 ceph-mon[9795]: 8.12 deep-scrub ok
Oct 09 09:39:23 compute-1 ceph-mon[9795]: 8.c scrub starts
Oct 09 09:39:23 compute-1 ceph-mon[9795]: 8.c scrub ok
Oct 09 09:39:23 compute-1 ceph-mon[9795]: pgmap v153: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:23 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 09 09:39:23 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct 09 09:39:23 compute-1 sudo[24974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhpqnjnqbuxejxnizbkbctkrdxhcrwkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002762.9574137-1168-135412515087168/AnsiballZ_systemd.py'
Oct 09 09:39:23 compute-1 sudo[24974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:23 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 09 09:39:23 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 09 09:39:23 compute-1 python3.9[24976]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:39:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:39:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:23.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:39:23 compute-1 sudo[24974]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:23 compute-1 sshd-session[18633]: Connection closed by 192.168.122.30 port 35844
Oct 09 09:39:23 compute-1 sshd-session[18630]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:39:23 compute-1 systemd[1]: session-22.scope: Deactivated successfully.
Oct 09 09:39:23 compute-1 systemd[1]: session-22.scope: Consumed 47.163s CPU time.
Oct 09 09:39:23 compute-1 systemd-logind[798]: Session 22 logged out. Waiting for processes to exit.
Oct 09 09:39:23 compute-1 systemd-logind[798]: Removed session 22.
Oct 09 09:39:24 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 09 09:39:24 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 09 09:39:24 compute-1 ceph-mon[9795]: 7.8 scrub starts
Oct 09 09:39:24 compute-1 ceph-mon[9795]: 7.8 scrub ok
Oct 09 09:39:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 09 09:39:24 compute-1 ceph-mon[9795]: osdmap e115: 3 total, 3 up, 3 in
Oct 09 09:39:24 compute-1 ceph-mon[9795]: 11.14 scrub starts
Oct 09 09:39:24 compute-1 ceph-mon[9795]: 11.14 scrub ok
Oct 09 09:39:24 compute-1 ceph-mon[9795]: 8.2 scrub starts
Oct 09 09:39:24 compute-1 ceph-mon[9795]: 8.2 scrub ok
Oct 09 09:39:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:24.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:25 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 09 09:39:25 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 09 09:39:25 compute-1 ceph-mon[9795]: 7.13 scrub starts
Oct 09 09:39:25 compute-1 ceph-mon[9795]: 7.13 scrub ok
Oct 09 09:39:25 compute-1 ceph-mon[9795]: 8.18 scrub starts
Oct 09 09:39:25 compute-1 ceph-mon[9795]: 8.18 scrub ok
Oct 09 09:39:25 compute-1 ceph-mon[9795]: 9.3 scrub starts
Oct 09 09:39:25 compute-1 ceph-mon[9795]: pgmap v155: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:25 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 09 09:39:25 compute-1 ceph-mon[9795]: 9.3 scrub ok
Oct 09 09:39:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct 09 09:39:25 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888633728s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 active pruub 286.476196289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:25 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:25.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:26 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct 09 09:39:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:26 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 09 09:39:26 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 09 09:39:26 compute-1 ceph-mon[9795]: 5.c scrub starts
Oct 09 09:39:26 compute-1 ceph-mon[9795]: 5.c scrub ok
Oct 09 09:39:26 compute-1 ceph-mon[9795]: 9.6 scrub starts
Oct 09 09:39:26 compute-1 ceph-mon[9795]: 9.6 scrub ok
Oct 09 09:39:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 09 09:39:26 compute-1 ceph-mon[9795]: osdmap e116: 3 total, 3 up, 3 in
Oct 09 09:39:26 compute-1 ceph-mon[9795]: 9.9 scrub starts
Oct 09 09:39:26 compute-1 ceph-mon[9795]: 9.9 scrub ok
Oct 09 09:39:26 compute-1 ceph-mon[9795]: osdmap e117: 3 total, 3 up, 3 in
Oct 09 09:39:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:26 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 4.
Oct 09 09:39:26 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:39:26 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:39:27 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct 09 09:39:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:27 compute-1 podman[25043]: 2025-10-09 09:39:27.124204401 +0000 UTC m=+0.027310940 container create a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:39:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:27 compute-1 podman[25043]: 2025-10-09 09:39:27.164198052 +0000 UTC m=+0.067304611 container init a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:39:27 compute-1 podman[25043]: 2025-10-09 09:39:27.169678601 +0000 UTC m=+0.072785140 container start a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 09 09:39:27 compute-1 bash[25043]: a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d
Oct 09 09:39:27 compute-1 podman[25043]: 2025-10-09 09:39:27.112963788 +0000 UTC m=+0.016070347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:39:27 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:39:27 compute-1 ceph-mon[9795]: 6.0 scrub starts
Oct 09 09:39:27 compute-1 ceph-mon[9795]: 6.0 scrub ok
Oct 09 09:39:27 compute-1 ceph-mon[9795]: 8.17 scrub starts
Oct 09 09:39:27 compute-1 ceph-mon[9795]: 8.17 scrub ok
Oct 09 09:39:27 compute-1 ceph-mon[9795]: pgmap v158: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:27 compute-1 ceph-mon[9795]: 8.3 scrub starts
Oct 09 09:39:27 compute-1 ceph-mon[9795]: 8.3 scrub ok
Oct 09 09:39:27 compute-1 ceph-mon[9795]: osdmap e118: 3 total, 3 up, 3 in
Oct 09 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:39:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:27.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:28 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct 09 09:39:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996621132s) [1] async=[1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 40'1059 active pruub 291.480834961s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:28 compute-1 ceph-mon[9795]: 6.6 scrub starts
Oct 09 09:39:28 compute-1 ceph-mon[9795]: 6.6 scrub ok
Oct 09 09:39:28 compute-1 ceph-mon[9795]: 12.7 scrub starts
Oct 09 09:39:28 compute-1 ceph-mon[9795]: 12.7 scrub ok
Oct 09 09:39:28 compute-1 ceph-mon[9795]: osdmap e119: 3 total, 3 up, 3 in
Oct 09 09:39:28 compute-1 sudo[25098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:39:28 compute-1 sudo[25098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:28 compute-1 sudo[25098]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:28.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:28 compute-1 sshd-session[25123]: Accepted publickey for zuul from 192.168.122.30 port 33116 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:39:28 compute-1 systemd-logind[798]: New session 23 of user zuul.
Oct 09 09:39:28 compute-1 systemd[1]: Started Session 23 of User zuul.
Oct 09 09:39:28 compute-1 sshd-session[25123]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:39:29 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct 09 09:39:29 compute-1 ceph-mon[9795]: 6.4 scrub starts
Oct 09 09:39:29 compute-1 ceph-mon[9795]: 6.4 scrub ok
Oct 09 09:39:29 compute-1 ceph-mon[9795]: pgmap v161: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:29 compute-1 ceph-mon[9795]: 4.3 scrub starts
Oct 09 09:39:29 compute-1 ceph-mon[9795]: 4.3 scrub ok
Oct 09 09:39:29 compute-1 ceph-mon[9795]: osdmap e120: 3 total, 3 up, 3 in
Oct 09 09:39:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:29.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:29 compute-1 python3.9[25277]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:39:30 compute-1 ceph-mon[9795]: 6.b deep-scrub starts
Oct 09 09:39:30 compute-1 ceph-mon[9795]: 6.b deep-scrub ok
Oct 09 09:39:30 compute-1 ceph-mon[9795]: 8.d deep-scrub starts
Oct 09 09:39:30 compute-1 ceph-mon[9795]: 8.d deep-scrub ok
Oct 09 09:39:30 compute-1 sudo[25431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxqekxfrxeqeklswjoqfemxbgvveoway ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002770.1336863-69-147677338051893/AnsiballZ_getent.py'
Oct 09 09:39:30 compute-1 sudo[25431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:30 compute-1 python3.9[25433]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 09 09:39:30 compute-1 sudo[25431]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:31 compute-1 sudo[25584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gigwndxohnhxyyttvqngsvsozlvijgrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002770.8814085-105-275977017032772/AnsiballZ_setup.py'
Oct 09 09:39:31 compute-1 sudo[25584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:31 compute-1 ceph-mon[9795]: 6.9 scrub starts
Oct 09 09:39:31 compute-1 ceph-mon[9795]: 6.9 scrub ok
Oct 09 09:39:31 compute-1 ceph-mon[9795]: pgmap v163: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:31 compute-1 ceph-mon[9795]: 8.1f deep-scrub starts
Oct 09 09:39:31 compute-1 ceph-mon[9795]: 8.1f deep-scrub ok
Oct 09 09:39:31 compute-1 python3.9[25586]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:39:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:39:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:31.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:39:31 compute-1 sudo[25584]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:31 compute-1 sudo[25669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkumaclowgkbzuabqsphljaettrvtuye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002770.8814085-105-275977017032772/AnsiballZ_dnf.py'
Oct 09 09:39:31 compute-1 sudo[25669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:31 compute-1 python3.9[25671]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 09 09:39:32 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 09 09:39:32 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 09 09:39:32 compute-1 ceph-mon[9795]: 6.c deep-scrub starts
Oct 09 09:39:32 compute-1 ceph-mon[9795]: 6.c deep-scrub ok
Oct 09 09:39:32 compute-1 ceph-mon[9795]: 8.16 scrub starts
Oct 09 09:39:32 compute-1 ceph-mon[9795]: 8.16 scrub ok
Oct 09 09:39:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:32.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:32 compute-1 sudo[25669]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:33 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 09 09:39:33 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 09 09:39:33 compute-1 ceph-mon[9795]: 10.15 scrub starts
Oct 09 09:39:33 compute-1 ceph-mon[9795]: 10.15 scrub ok
Oct 09 09:39:33 compute-1 ceph-mon[9795]: 8.1b scrub starts
Oct 09 09:39:33 compute-1 ceph-mon[9795]: 8.1b scrub ok
Oct 09 09:39:33 compute-1 ceph-mon[9795]: 11.3 scrub starts
Oct 09 09:39:33 compute-1 ceph-mon[9795]: 11.3 scrub ok
Oct 09 09:39:33 compute-1 ceph-mon[9795]: pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 9 op/s; 54 B/s, 2 objects/s recovering
Oct 09 09:39:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 09 09:39:33 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct 09 09:39:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:33 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:39:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:33 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:39:33 compute-1 sudo[25823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjydtahyydywvvpzsykjhwujscuaburh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002773.1660023-147-206575638065931/AnsiballZ_dnf.py'
Oct 09 09:39:33 compute-1 sudo[25823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:33 compute-1 python3.9[25825]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 09 09:39:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 09 09:39:34 compute-1 ceph-mon[9795]: 10.18 scrub starts
Oct 09 09:39:34 compute-1 ceph-mon[9795]: 10.18 scrub ok
Oct 09 09:39:34 compute-1 ceph-mon[9795]: 8.10 scrub starts
Oct 09 09:39:34 compute-1 ceph-mon[9795]: 8.10 scrub ok
Oct 09 09:39:34 compute-1 ceph-mon[9795]: 8.6 scrub starts
Oct 09 09:39:34 compute-1 ceph-mon[9795]: 8.6 scrub ok
Oct 09 09:39:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 09 09:39:34 compute-1 ceph-mon[9795]: osdmap e121: 3 total, 3 up, 3 in
Oct 09 09:39:34 compute-1 sudo[25823]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:34.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:35 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 09 09:39:35 compute-1 sudo[25976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbossddjqghvtfujdseycfcysiqickye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002774.6599228-171-281118560556169/AnsiballZ_systemd.py'
Oct 09 09:39:35 compute-1 sudo[25976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:35 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 09 09:39:35 compute-1 ceph-mon[9795]: 10.19 scrub starts
Oct 09 09:39:35 compute-1 ceph-mon[9795]: 10.19 scrub ok
Oct 09 09:39:35 compute-1 ceph-mon[9795]: 9.f scrub starts
Oct 09 09:39:35 compute-1 ceph-mon[9795]: 9.f scrub ok
Oct 09 09:39:35 compute-1 ceph-mon[9795]: 10.1f deep-scrub starts
Oct 09 09:39:35 compute-1 ceph-mon[9795]: 10.1f deep-scrub ok
Oct 09 09:39:35 compute-1 ceph-mon[9795]: pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 9 op/s; 53 B/s, 2 objects/s recovering
Oct 09 09:39:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 09 09:39:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:39:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct 09 09:39:35 compute-1 python3.9[25978]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:39:35 compute-1 sudo[25976]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:39:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:39:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455636024s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 active pruub 299.496215820s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:36 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct 09 09:39:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:36 compute-1 python3.9[26132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:39:36 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 09 09:39:36 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 09 09:39:36 compute-1 ceph-mon[9795]: 10.8 scrub starts
Oct 09 09:39:36 compute-1 ceph-mon[9795]: 10.8 scrub ok
Oct 09 09:39:36 compute-1 ceph-mon[9795]: 6.e scrub starts
Oct 09 09:39:36 compute-1 ceph-mon[9795]: 6.e scrub ok
Oct 09 09:39:36 compute-1 ceph-mon[9795]: 10.f scrub starts
Oct 09 09:39:36 compute-1 ceph-mon[9795]: 10.f scrub ok
Oct 09 09:39:36 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 09 09:39:36 compute-1 ceph-mon[9795]: osdmap e122: 3 total, 3 up, 3 in
Oct 09 09:39:36 compute-1 ceph-mon[9795]: osdmap e123: 3 total, 3 up, 3 in
Oct 09 09:39:36 compute-1 sudo[26282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpanhwuwvdqetjqfhaehispomefklyxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002776.316975-225-187590443801600/AnsiballZ_sefcontext.py'
Oct 09 09:39:36 compute-1 sudo[26282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:36 compute-1 python3.9[26284]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 09 09:39:36 compute-1 sudo[26282]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:37 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct 09 09:39:37 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 09 09:39:37 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 09 09:39:37 compute-1 ceph-mon[9795]: 10.5 scrub starts
Oct 09 09:39:37 compute-1 ceph-mon[9795]: 10.5 scrub ok
Oct 09 09:39:37 compute-1 ceph-mon[9795]: 6.5 scrub starts
Oct 09 09:39:37 compute-1 ceph-mon[9795]: 6.5 scrub ok
Oct 09 09:39:37 compute-1 ceph-mon[9795]: 10.4 scrub starts
Oct 09 09:39:37 compute-1 ceph-mon[9795]: 10.4 scrub ok
Oct 09 09:39:37 compute-1 ceph-mon[9795]: pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 09 09:39:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 09 09:39:37 compute-1 ceph-mon[9795]: osdmap e124: 3 total, 3 up, 3 in
Oct 09 09:39:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:37.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:37 compute-1 python3.9[26435]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:39:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:38 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 09 09:39:38 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 09 09:39:38 compute-1 sudo[26591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvsugqvyjeswgutaplqpkuzowqceifxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002778.0027592-279-246640210661682/AnsiballZ_dnf.py'
Oct 09 09:39:38 compute-1 sudo[26591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:38 compute-1 ceph-mon[9795]: 6.2 scrub starts
Oct 09 09:39:38 compute-1 ceph-mon[9795]: 6.2 scrub ok
Oct 09 09:39:38 compute-1 ceph-mon[9795]: 10.1 scrub starts
Oct 09 09:39:38 compute-1 ceph-mon[9795]: 10.1 scrub ok
Oct 09 09:39:38 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct 09 09:39:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730270386s) [1] async=[1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 40'1059 active pruub 302.409027100s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:38 compute-1 python3.9[26593]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:38.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:39 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 09 09:39:39 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:39:39 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct 09 09:39:39 compute-1 ceph-mon[9795]: 6.a scrub starts
Oct 09 09:39:39 compute-1 ceph-mon[9795]: 6.a scrub ok
Oct 09 09:39:39 compute-1 ceph-mon[9795]: osdmap e125: 3 total, 3 up, 3 in
Oct 09 09:39:39 compute-1 ceph-mon[9795]: pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 09 09:39:39 compute-1 sudo[26591]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:39.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3c8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:39 compute-1 sudo[26759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owxxsyxvbwpaticnmylsstasqhvvgxpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002779.440969-303-164594741771782/AnsiballZ_command.py'
Oct 09 09:39:39 compute-1 sudo[26759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:39 compute-1 python3.9[26761]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:39:40 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 09 09:39:40 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 09 09:39:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:40 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:40 compute-1 ceph-mon[9795]: 6.3 scrub starts
Oct 09 09:39:40 compute-1 ceph-mon[9795]: 6.3 scrub ok
Oct 09 09:39:40 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 09 09:39:40 compute-1 ceph-mon[9795]: osdmap e126: 3 total, 3 up, 3 in
Oct 09 09:39:40 compute-1 sudo[26759]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:40.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:40 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:40 compute-1 sudo[27046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttovjvlmldvuauxmetrvlkojszonglir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002780.6221416-327-2985671256313/AnsiballZ_file.py'
Oct 09 09:39:40 compute-1 sudo[27046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:41 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 09 09:39:41 compute-1 python3.9[27048]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 09:39:41 compute-1 sudo[27046]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:41 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 09 09:39:41 compute-1 ceph-mon[9795]: 10.1b deep-scrub starts
Oct 09 09:39:41 compute-1 ceph-mon[9795]: 10.1b deep-scrub ok
Oct 09 09:39:41 compute-1 ceph-mon[9795]: 10.1a scrub starts
Oct 09 09:39:41 compute-1 ceph-mon[9795]: 10.1a scrub ok
Oct 09 09:39:41 compute-1 ceph-mon[9795]: pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 09 09:39:41 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct 09 09:39:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:41 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538806915s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 active pruub 300.480133057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:41 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093941 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:39:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:41 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3b80016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:41 compute-1 python3.9[27199]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:39:42 compute-1 sudo[27351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvthqkpzygbwdyjnldiqwjraxfygwbom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002781.8879244-375-100091525157275/AnsiballZ_dnf.py'
Oct 09 09:39:42 compute-1 sudo[27351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:42 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 09 09:39:42 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 09 09:39:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:42 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:42 compute-1 python3.9[27353]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:42 compute-1 ceph-mon[9795]: 10.1d scrub starts
Oct 09 09:39:42 compute-1 ceph-mon[9795]: 10.1d scrub ok
Oct 09 09:39:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 09 09:39:42 compute-1 ceph-mon[9795]: osdmap e127: 3 total, 3 up, 3 in
Oct 09 09:39:42 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct 09 09:39:42 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:42 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:42.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:42 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 09 09:39:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 09 09:39:43 compute-1 sudo[27351]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:43 compute-1 ceph-mon[9795]: 10.9 scrub starts
Oct 09 09:39:43 compute-1 ceph-mon[9795]: 10.9 scrub ok
Oct 09 09:39:43 compute-1 ceph-mon[9795]: pgmap v176: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:39:43 compute-1 ceph-mon[9795]: osdmap e128: 3 total, 3 up, 3 in
Oct 09 09:39:43 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct 09 09:39:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:39:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:43.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:39:43 compute-1 sudo[27505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sesyaxrlsmvcguccxulzurbnvntpaqzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002783.4238825-402-247743328448090/AnsiballZ_dnf.py'
Oct 09 09:39:43 compute-1 sudo[27505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:43 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:43 compute-1 python3.9[27507]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Oct 09 09:39:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Oct 09 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:44 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3b80021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:44 compute-1 ceph-mon[9795]: 10.c scrub starts
Oct 09 09:39:44 compute-1 ceph-mon[9795]: 10.c scrub ok
Oct 09 09:39:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:39:44 compute-1 ceph-mon[9795]: osdmap e129: 3 total, 3 up, 3 in
Oct 09 09:39:44 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct 09 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.899759293s) [2] async=[2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 40'1059 active pruub 308.632507324s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.334880) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784334901, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3407, "num_deletes": 251, "total_data_size": 7305240, "memory_usage": 7417344, "flush_reason": "Manual Compaction"}
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct 09 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784345208, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 4787675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7168, "largest_seqno": 10570, "table_properties": {"data_size": 4771843, "index_size": 10214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4549, "raw_key_size": 42141, "raw_average_key_size": 23, "raw_value_size": 4736781, "raw_average_value_size": 2625, "num_data_blocks": 444, "num_entries": 1804, "num_filter_entries": 1804, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002664, "oldest_key_time": 1760002664, "file_creation_time": 1760002784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10356 microseconds, and 7658 cpu microseconds.
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345235) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 4787675 bytes OK
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345248) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345828) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345842) EVENT_LOG_v1 {"time_micros": 1760002784345840, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7288352, prev total WAL file size 7288352, number of live WAL files 2.
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.346859) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4675KB)], [18(12MB)]
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784346875, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18169968, "oldest_snapshot_seqno": -1}
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3979 keys, 14451848 bytes, temperature: kUnknown
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784379015, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14451848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14419140, "index_size": 21654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 101480, "raw_average_key_size": 25, "raw_value_size": 14340151, "raw_average_value_size": 3603, "num_data_blocks": 936, "num_entries": 3979, "num_filter_entries": 3979, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.379155) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14451848 bytes
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.379535) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 564.5 rd, 449.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.6, 12.8 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.0) OK, records in: 4503, records dropped: 524 output_compression: NoCompression
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.379549) EVENT_LOG_v1 {"time_micros": 1760002784379542, "job": 8, "event": "compaction_finished", "compaction_time_micros": 32187, "compaction_time_cpu_micros": 22593, "output_level": 6, "num_output_files": 1, "total_output_size": 14451848, "num_input_records": 4503, "num_output_records": 3979, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784380100, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784381486, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.346820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:39:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:44.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:39:44 compute-1 sudo[27505]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:44 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:45 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 09 09:39:45 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 09 09:39:45 compute-1 ceph-mon[9795]: 10.6 deep-scrub starts
Oct 09 09:39:45 compute-1 ceph-mon[9795]: 10.6 deep-scrub ok
Oct 09 09:39:45 compute-1 ceph-mon[9795]: pgmap v179: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:45 compute-1 ceph-mon[9795]: osdmap e130: 3 total, 3 up, 3 in
Oct 09 09:39:45 compute-1 sudo[27658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcjjrwchyojjwkdewxcnzokldhbbbhhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002785.1250143-438-255014177011447/AnsiballZ_stat.py'
Oct 09 09:39:45 compute-1 sudo[27658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct 09 09:39:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:39:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:45.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:39:45 compute-1 python3.9[27661]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:39:45 compute-1 sudo[27658]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:45 compute-1 kernel: ganesha.nfsd[26598]: segfault at 50 ip 00007fc474e1632e sp 00007fc43b7fd210 error 4 in libntirpc.so.5.8[7fc474dfb000+2c000] likely on CPU 0 (core 0, socket 0)
Oct 09 09:39:45 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 09 09:39:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:45 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy ignored for local
Oct 09 09:39:45 compute-1 systemd[1]: Started Process Core Dump (PID 27688/UID 0).
Oct 09 09:39:45 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Oct 09 09:39:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Oct 09 09:39:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct 09 09:39:46 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:46 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:46 compute-1 sudo[27815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uokjdoyhgmczrjhlbcefebzqttztjxcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002785.7216794-462-180986353009006/AnsiballZ_slurp.py'
Oct 09 09:39:46 compute-1 sudo[27815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:46 compute-1 python3.9[27817]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 09 09:39:46 compute-1 sudo[27815]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:46 compute-1 ceph-mon[9795]: 10.a scrub starts
Oct 09 09:39:46 compute-1 ceph-mon[9795]: 10.a scrub ok
Oct 09 09:39:46 compute-1 ceph-mon[9795]: osdmap e131: 3 total, 3 up, 3 in
Oct 09 09:39:46 compute-1 ceph-mon[9795]: osdmap e132: 3 total, 3 up, 3 in
Oct 09 09:39:46 compute-1 systemd-coredump[27689]: Process 25059 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 43:
                                                   #0  0x00007fc474e1632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 09:39:46 compute-1 systemd[1]: systemd-coredump@4-27688-0.service: Deactivated successfully.
Oct 09 09:39:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:46.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:46 compute-1 podman[27849]: 2025-10-09 09:39:46.664279495 +0000 UTC m=+0.023649252 container died a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:39:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70-merged.mount: Deactivated successfully.
Oct 09 09:39:46 compute-1 podman[27849]: 2025-10-09 09:39:46.685734037 +0000 UTC m=+0.045103794 container remove a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 09 09:39:46 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct 09 09:39:46 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct 09 09:39:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 09 09:39:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 09 09:39:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct 09 09:39:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:47 compute-1 sshd-session[25126]: Connection closed by 192.168.122.30 port 33116
Oct 09 09:39:47 compute-1 sshd-session[25123]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:39:47 compute-1 systemd[1]: session-23.scope: Deactivated successfully.
Oct 09 09:39:47 compute-1 systemd[1]: session-23.scope: Consumed 13.106s CPU time.
Oct 09 09:39:47 compute-1 systemd-logind[798]: Session 23 logged out. Waiting for processes to exit.
Oct 09 09:39:47 compute-1 systemd-logind[798]: Removed session 23.
Oct 09 09:39:47 compute-1 ceph-mon[9795]: 10.0 scrub starts
Oct 09 09:39:47 compute-1 ceph-mon[9795]: 10.0 scrub ok
Oct 09 09:39:47 compute-1 ceph-mon[9795]: pgmap v183: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5/222 objects misplaced (2.252%)
Oct 09 09:39:47 compute-1 ceph-mon[9795]: osdmap e133: 3 total, 3 up, 3 in
Oct 09 09:39:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:39:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:47.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:39:47 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 09 09:39:47 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 09 09:39:48 compute-1 ceph-mon[9795]: 10.d scrub starts
Oct 09 09:39:48 compute-1 ceph-mon[9795]: 10.d scrub ok
Oct 09 09:39:48 compute-1 sudo[27882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:39:48 compute-1 sudo[27882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:48 compute-1 sudo[27882]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:48.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:49 compute-1 ceph-mon[9795]: 10.b scrub starts
Oct 09 09:39:49 compute-1 ceph-mon[9795]: 10.b scrub ok
Oct 09 09:39:49 compute-1 ceph-mon[9795]: pgmap v185: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5/222 objects misplaced (2.252%)
Oct 09 09:39:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:39:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:39:50 compute-1 sudo[27908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:39:50 compute-1 sudo[27908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:50 compute-1 sudo[27908]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:50 compute-1 sudo[27933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:39:50 compute-1 sudo[27933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:39:50 compute-1 sudo[27933]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:50.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:51 compute-1 ceph-mon[9795]: pgmap v186: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5/222 objects misplaced (2.252%)
Oct 09 09:39:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:51.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:51 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093951 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:39:52 compute-1 sshd-session[27988]: Accepted publickey for zuul from 192.168.122.30 port 43600 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:39:52 compute-1 systemd-logind[798]: New session 24 of user zuul.
Oct 09 09:39:52 compute-1 systemd[1]: Started Session 24 of User zuul.
Oct 09 09:39:52 compute-1 sshd-session[27988]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:39:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:52.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:52 compute-1 python3.9[28141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:53 compute-1 ceph-mon[9795]: pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:39:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:39:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:53.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:39:53 compute-1 python3.9[28296]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:39:54 compute-1 python3.9[28489]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:39:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:55 compute-1 sshd-session[27991]: Connection closed by 192.168.122.30 port 43600
Oct 09 09:39:55 compute-1 sshd-session[27988]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:39:55 compute-1 systemd[1]: session-24.scope: Deactivated successfully.
Oct 09 09:39:55 compute-1 systemd[1]: session-24.scope: Consumed 1.686s CPU time.
Oct 09 09:39:55 compute-1 systemd-logind[798]: Session 24 logged out. Waiting for processes to exit.
Oct 09 09:39:55 compute-1 systemd-logind[798]: Removed session 24.
Oct 09 09:39:55 compute-1 ceph-mon[9795]: pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:55 compute-1 sudo[28516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:39:55 compute-1 sudo[28516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:55 compute-1 sudo[28516]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:56.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:56 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 5.
Oct 09 09:39:56 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:39:56 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:39:57 compute-1 podman[28580]: 2025-10-09 09:39:57.118789565 +0000 UTC m=+0.026235590 container create a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 09 09:39:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:39:57 compute-1 podman[28580]: 2025-10-09 09:39:57.166496627 +0000 UTC m=+0.073942673 container init a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 09 09:39:57 compute-1 podman[28580]: 2025-10-09 09:39:57.170469821 +0000 UTC m=+0.077915846 container start a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 09 09:39:57 compute-1 bash[28580]: a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399
Oct 09 09:39:57 compute-1 podman[28580]: 2025-10-09 09:39:57.107880822 +0000 UTC m=+0.015326858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:39:57 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:39:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:57.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:57 compute-1 ceph-mon[9795]: pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:58.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:39:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:59 compute-1 ceph-mon[9795]: pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:40:00 compute-1 systemd[1]: Starting system activity accounting tool...
Oct 09 09:40:00 compute-1 sshd-session[28636]: Accepted publickey for zuul from 192.168.122.30 port 47366 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:40:00 compute-1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 09:40:00 compute-1 systemd[1]: Finished system activity accounting tool.
Oct 09 09:40:00 compute-1 systemd-logind[798]: New session 25 of user zuul.
Oct 09 09:40:00 compute-1 systemd[1]: Started Session 25 of User zuul.
Oct 09 09:40:00 compute-1 sshd-session[28636]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:40:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:00 compute-1 ceph-mon[9795]: overall HEALTH_OK
Oct 09 09:40:01 compute-1 python3.9[28790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:40:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:01.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:01 compute-1 python3.9[28945]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:40:01 compute-1 ceph-mon[9795]: pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:40:02 compute-1 sudo[29099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cygzdfsgjeflkzcnbnjjsjvqeslrjefh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002802.0771258-81-52809920965814/AnsiballZ_setup.py'
Oct 09 09:40:02 compute-1 sudo[29099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:02 compute-1 python3.9[29101]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:40:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:02.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:02 compute-1 sudo[29099]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:02 compute-1 sudo[29183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwddzxylrksttevwmlfahlgirujmgxqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002802.0771258-81-52809920965814/AnsiballZ_dnf.py'
Oct 09 09:40:02 compute-1 sudo[29183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:03 compute-1 python3.9[29185]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:40:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:03 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:40:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:03 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:40:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:03.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:03 compute-1 ceph-mon[9795]: pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:40:04 compute-1 sudo[29183]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:04 compute-1 sudo[29337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyltnqgxltqusszwpjdgehrghmrjlxqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002804.2888265-117-220153568456977/AnsiballZ_setup.py'
Oct 09 09:40:04 compute-1 sudo[29337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:04.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:04 compute-1 python3.9[29339]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:40:04 compute-1 sudo[29337]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:40:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:05.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:05 compute-1 sudo[29533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkbwclrqtcekdndofqzuypwyrawqidzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002805.2435703-150-256467879762127/AnsiballZ_file.py'
Oct 09 09:40:05 compute-1 sudo[29533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:05 compute-1 python3.9[29535]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:05 compute-1 sudo[29533]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:05 compute-1 ceph-mon[9795]: pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 09 09:40:06 compute-1 sudo[29685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwigpihgmmrmfhqxsidxxpwmnmakesxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002805.9496355-174-277402598333353/AnsiballZ_command.py'
Oct 09 09:40:06 compute-1 sudo[29685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:06 compute-1 python3.9[29687]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:40:06 compute-1 sudo[29685]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:06.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:07 compute-1 sudo[29848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aneukarqeqynnoezyyklzjakupxovmet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002806.7120779-198-35511145514879/AnsiballZ_stat.py'
Oct 09 09:40:07 compute-1 sudo[29848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:07 compute-1 python3.9[29850]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:07 compute-1 sudo[29848]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:07 compute-1 sudo[29927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxdqyvhlfkzednditgdprfmvdijmfopi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002806.7120779-198-35511145514879/AnsiballZ_file.py'
Oct 09 09:40:07 compute-1 sudo[29927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:07.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:07 compute-1 python3.9[29929]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:07 compute-1 sudo[29927]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:07 compute-1 sudo[30079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovcgfsmmztdxmynybhjeremwciuzawvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002807.705719-234-42482331471142/AnsiballZ_stat.py'
Oct 09 09:40:07 compute-1 sudo[30079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:07 compute-1 ceph-mon[9795]: pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Oct 09 09:40:08 compute-1 python3.9[30081]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:08 compute-1 sudo[30079]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:08 compute-1 sudo[30157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zexivvovbghwuxjqqdrarlaowrzcogly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002807.705719-234-42482331471142/AnsiballZ_file.py'
Oct 09 09:40:08 compute-1 sudo[30157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:08 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094008 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:40:08 compute-1 python3.9[30159]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:08 compute-1 sudo[30157]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:08 compute-1 sudo[30184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:40:08 compute-1 sudo[30184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:08 compute-1 sudo[30184]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:08.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:08 compute-1 sudo[30334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqabvudghwdhwepzekpwdbnelxyajrds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002808.619918-273-259135115216160/AnsiballZ_ini_file.py'
Oct 09 09:40:08 compute-1 sudo[30334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094009 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [NOTICE] 281/094009 (4) : haproxy version is 2.3.17-d1c9119
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [NOTICE] 281/094009 (4) : path to executable is /usr/local/sbin/haproxy
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [ALERT] 281/094009 (4) : backend 'backend' has no server available!
Oct 09 09:40:09 compute-1 python3.9[30336]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:09 compute-1 sudo[30334]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 09:40:09 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 09:40:09 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:40:09 compute-1 sudo[30501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vepkypppvvkrcxuzumcqzgwowheskyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002809.2373314-273-6038446419848/AnsiballZ_ini_file.py'
Oct 09 09:40:09 compute-1 sudo[30501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:09.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:09 compute-1 python3.9[30503]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a64000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:09 compute-1 sudo[30501]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:09 compute-1 ceph-mon[9795]: pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Oct 09 09:40:09 compute-1 sudo[30655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alvzjiesllmoqqbyqetatxtzqnszcjzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002809.768176-273-180042155526968/AnsiballZ_ini_file.py'
Oct 09 09:40:09 compute-1 sudo[30655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:10 compute-1 python3.9[30657]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:10 compute-1 sudo[30655]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:10 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:10 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:10 compute-1 sudo[30807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvbrdwrdvtdaflsqmyaxjffmwgjgiamv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002810.2492023-273-123099009156073/AnsiballZ_ini_file.py'
Oct 09 09:40:10 compute-1 sudo[30807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:10 compute-1 python3.9[30809]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:10 compute-1 sudo[30807]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:10 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:10 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a500034a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:11 compute-1 sudo[30959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpvwnbaninikpwujsmtznxubhymgtaud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002810.9090815-366-116014474164661/AnsiballZ_dnf.py'
Oct 09 09:40:11 compute-1 sudo[30959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:11 compute-1 python3.9[30961]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:40:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:11.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:11 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094011 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:40:11 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:11 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:11 compute-1 ceph-mon[9795]: pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Oct 09 09:40:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:12 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:12 compute-1 sudo[30959]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:12 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:12 compute-1 sudo[31113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aidkbjsucsdscgwitzsktaihtmuhplct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002812.6815112-399-212078206735290/AnsiballZ_setup.py'
Oct 09 09:40:12 compute-1 sudo[31113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:13 compute-1 python3.9[31115]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:40:13 compute-1 sudo[31113]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:13.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:13 compute-1 sudo[31268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkeuirgworigqqotiqqtiamkdegegejq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002813.3355818-423-105355242584825/AnsiballZ_stat.py'
Oct 09 09:40:13 compute-1 sudo[31268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:13 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:13 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:13 compute-1 python3.9[31270]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:40:13 compute-1 sudo[31268]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:13 compute-1 ceph-mon[9795]: pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Oct 09 09:40:14 compute-1 sudo[31420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzjjuyfyucsulvnuvgesoojzvtynumte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002813.9203856-450-99781122283887/AnsiballZ_stat.py'
Oct 09 09:40:14 compute-1 sudo[31420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:14 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:14 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:14 compute-1 python3.9[31422]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:40:14 compute-1 sudo[31420]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:14.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:14 compute-1 sudo[31572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dinfzgnpckspvasokhjruncrzepusotm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002814.51358-480-69482833722772/AnsiballZ_service_facts.py'
Oct 09 09:40:14 compute-1 sudo[31572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:14 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:14 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:15 compute-1 python3.9[31574]: ansible-service_facts Invoked
Oct 09 09:40:15 compute-1 network[31591]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:40:15 compute-1 network[31592]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:40:15 compute-1 network[31593]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:40:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:15.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:15 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:15 compute-1 ceph-mon[9795]: pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Oct 09 09:40:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:16 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a5c001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:16 compute-1 sudo[31572]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:16 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:40:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:16 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a5c001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:17 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:17 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:17 compute-1 ceph-mon[9795]: pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:40:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:18 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:18.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:18 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:18 compute-1 sudo[31881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzpvbkibkksdxjcwrbgheiimqupxixkn ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760002818.6581342-519-209489739952333/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760002818.6581342-519-209489739952333/args'
Oct 09 09:40:18 compute-1 sudo[31881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:18 compute-1 sudo[31881]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:19 compute-1 sudo[32049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlyspwonfyuoauvecblxapodndyniuww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002819.2532034-552-24956966108699/AnsiballZ_dnf.py'
Oct 09 09:40:19 compute-1 sudo[32049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:19.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:19 compute-1 python3.9[32051]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:40:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:19 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a5c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:19 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:40:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:19 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:40:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:19 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:40:19 compute-1 ceph-mon[9795]: pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 09 09:40:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:40:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:20 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:20 compute-1 sudo[32049]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:20.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:20 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:40:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:20 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:40:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:21.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:40:21 compute-1 sudo[32203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vttpalersmcjibqoyyllxqnfzlrtjfkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002821.0868585-591-203602064358212/AnsiballZ_package_facts.py'
Oct 09 09:40:21 compute-1 sudo[32203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:21 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:21 compute-1 python3.9[32205]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 09 09:40:21 compute-1 ceph-mon[9795]: pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 09 09:40:22 compute-1 sudo[32203]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:22 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:22 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a5c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:22 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:22 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:23 compute-1 sudo[32355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhowlnoobpxofwueyprdmxgyxbddvzrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002822.8280551-621-279814411965862/AnsiballZ_stat.py'
Oct 09 09:40:23 compute-1 sudo[32355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:23 compute-1 python3.9[32357]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:23 compute-1 sudo[32355]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:23 compute-1 sudo[32434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnxtdtljincsnsyktojielybqblburkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002822.8280551-621-279814411965862/AnsiballZ_file.py'
Oct 09 09:40:23 compute-1 sudo[32434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:23.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:23 compute-1 python3.9[32436]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:23 compute-1 sudo[32434]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:23 compute-1 kernel: ganesha.nfsd[30402]: segfault at 50 ip 00007f6b0f7a932e sp 00007f6ad8ff8210 error 4 in libntirpc.so.5.8[7f6b0f78e000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 09 09:40:23 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 09 09:40:23 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:23 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50007030 fd 38 proxy ignored for local
Oct 09 09:40:23 compute-1 systemd[1]: Started Process Core Dump (PID 32461/UID 0).
Oct 09 09:40:23 compute-1 ceph-mon[9795]: pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 09 09:40:24 compute-1 sudo[32588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqospapgchfcsgrqezslxyxjfimuasgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002823.8514805-658-109243784056222/AnsiballZ_stat.py'
Oct 09 09:40:24 compute-1 sudo[32588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:24 compute-1 python3.9[32590]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:24 compute-1 sudo[32588]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:24 compute-1 sudo[32666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rskfeehikrwhgyapyfnssyrgpqggonal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002823.8514805-658-109243784056222/AnsiballZ_file.py'
Oct 09 09:40:24 compute-1 sudo[32666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:24 compute-1 python3.9[32668]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:24 compute-1 sudo[32666]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:24 compute-1 systemd-coredump[32462]: Process 28596 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 45:
                                                   #0  0x00007f6b0f7a932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 09:40:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:24.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:24 compute-1 systemd[1]: systemd-coredump@5-32461-0.service: Deactivated successfully.
Oct 09 09:40:24 compute-1 podman[32700]: 2025-10-09 09:40:24.744090341 +0000 UTC m=+0.017534801 container died a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:40:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d-merged.mount: Deactivated successfully.
Oct 09 09:40:24 compute-1 podman[32700]: 2025-10-09 09:40:24.761319844 +0000 UTC m=+0.034764295 container remove a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 09 09:40:24 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct 09 09:40:24 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct 09 09:40:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:25.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:25 compute-1 sshd-session[1521]: Received disconnect from 192.168.26.46 port 40706:11: disconnected by user
Oct 09 09:40:25 compute-1 sshd-session[1521]: Disconnected from user zuul 192.168.26.46 port 40706
Oct 09 09:40:25 compute-1 sshd-session[1518]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:40:25 compute-1 systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Oct 09 09:40:25 compute-1 systemd[1]: session-3.scope: Deactivated successfully.
Oct 09 09:40:25 compute-1 systemd[1]: session-3.scope: Consumed 6.293s CPU time.
Oct 09 09:40:25 compute-1 systemd-logind[798]: Removed session 3.
Oct 09 09:40:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:25 compute-1 ceph-mon[9795]: pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Oct 09 09:40:25 compute-1 sudo[32859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdswzlnkpurhknayzwmcpjxibjotephb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002825.4776862-712-98968814143458/AnsiballZ_lineinfile.py'
Oct 09 09:40:25 compute-1 sudo[32859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:26 compute-1 python3.9[32861]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:26 compute-1 sudo[32859]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:26.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:27 compute-1 sudo[33011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hckfiblvsuizcrymnytdjpdpautxdnct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002826.9814963-757-146198739284801/AnsiballZ_setup.py'
Oct 09 09:40:27 compute-1 sudo[33011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:27 compute-1 python3.9[33013]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:40:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:27.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:27 compute-1 sudo[33011]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:28 compute-1 ceph-mon[9795]: pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 09 09:40:28 compute-1 sudo[33096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvsmlqaaqfqysviieoerwskgvpssnrde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002826.9814963-757-146198739284801/AnsiballZ_systemd.py'
Oct 09 09:40:28 compute-1 sudo[33096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:28 compute-1 python3.9[33098]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:40:28 compute-1 sudo[33096]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:28 compute-1 sudo[33125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:40:28 compute-1 sudo[33125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:28 compute-1 sudo[33125]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:29 compute-1 sshd-session[28640]: Connection closed by 192.168.122.30 port 47366
Oct 09 09:40:29 compute-1 sshd-session[28636]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:40:29 compute-1 systemd[1]: session-25.scope: Deactivated successfully.
Oct 09 09:40:29 compute-1 systemd[1]: session-25.scope: Consumed 16.852s CPU time.
Oct 09 09:40:29 compute-1 systemd-logind[798]: Session 25 logged out. Waiting for processes to exit.
Oct 09 09:40:29 compute-1 systemd-logind[798]: Removed session 25.
Oct 09 09:40:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094029 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:40:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:29.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094029 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:40:30 compute-1 ceph-mon[9795]: pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 09 09:40:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094030 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:40:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:30.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:31.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:32 compute-1 ceph-mon[9795]: pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 09 09:40:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:32.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:33.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:34 compute-1 ceph-mon[9795]: pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 09 09:40:34 compute-1 sshd-session[33153]: Accepted publickey for zuul from 192.168.122.30 port 38136 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:40:34 compute-1 systemd-logind[798]: New session 26 of user zuul.
Oct 09 09:40:34 compute-1 systemd[1]: Started Session 26 of User zuul.
Oct 09 09:40:34 compute-1 sshd-session[33153]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:40:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:40:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:34.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:40:34 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 6.
Oct 09 09:40:34 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:40:34 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Start request repeated too quickly.
Oct 09 09:40:34 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct 09 09:40:34 compute-1 systemd[1]: Failed to start Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:40:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:40:35 compute-1 sudo[33306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iylsdfmlcaokjoamlbfzdligvealprvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002834.7159698-27-25904972013377/AnsiballZ_file.py'
Oct 09 09:40:35 compute-1 sudo[33306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:35 compute-1 python3.9[33308]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:35 compute-1 sudo[33306]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:35.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:35 compute-1 sudo[33459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rybxwzlpcjkdkbltgackdzxtuojuwaog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002835.3861845-63-164257137511656/AnsiballZ_stat.py'
Oct 09 09:40:35 compute-1 sudo[33459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:35 compute-1 python3.9[33461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:35 compute-1 sudo[33459]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:36 compute-1 ceph-mon[9795]: pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:40:36 compute-1 sudo[33537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrffxdbhcshxjtbadbtjbcjnocggzizv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002835.3861845-63-164257137511656/AnsiballZ_file.py'
Oct 09 09:40:36 compute-1 sudo[33537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:36 compute-1 python3.9[33539]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:36 compute-1 sudo[33537]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:36 compute-1 sshd-session[33156]: Connection closed by 192.168.122.30 port 38136
Oct 09 09:40:36 compute-1 sshd-session[33153]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:40:36 compute-1 systemd[1]: session-26.scope: Deactivated successfully.
Oct 09 09:40:36 compute-1 systemd[1]: session-26.scope: Consumed 1.096s CPU time.
Oct 09 09:40:36 compute-1 systemd-logind[798]: Session 26 logged out. Waiting for processes to exit.
Oct 09 09:40:36 compute-1 systemd-logind[798]: Removed session 26.
Oct 09 09:40:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:36.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:37.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:38 compute-1 ceph-mon[9795]: pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:40:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:38.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:40:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:40:40 compute-1 ceph-mon[9795]: pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:40:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:40.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:41 compute-1 sshd-session[33567]: Accepted publickey for zuul from 192.168.122.30 port 41058 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:40:41 compute-1 systemd-logind[798]: New session 27 of user zuul.
Oct 09 09:40:41 compute-1 systemd[1]: Started Session 27 of User zuul.
Oct 09 09:40:41 compute-1 sshd-session[33567]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:40:42 compute-1 ceph-mon[9795]: pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:40:42 compute-1 python3.9[33720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:40:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:43 compute-1 sudo[33874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfuuwsqmosruukldgiimrsihmvmomfos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002842.7861967-60-205868820001704/AnsiballZ_file.py'
Oct 09 09:40:43 compute-1 sudo[33874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:43 compute-1 python3.9[33876]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:43 compute-1 sudo[33874]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:43.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:43 compute-1 sudo[34050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzydvsodoemxfgulodvxbxzawhhrwpoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002843.5624094-84-219351730398149/AnsiballZ_stat.py'
Oct 09 09:40:43 compute-1 sudo[34050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:44 compute-1 python3.9[34052]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:44 compute-1 sudo[34050]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:44 compute-1 ceph-mon[9795]: pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:40:44 compute-1 sudo[34128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spwakthnunkznsktonivepvskkjtcqhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002843.5624094-84-219351730398149/AnsiballZ_file.py'
Oct 09 09:40:44 compute-1 sudo[34128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:44 compute-1 python3.9[34130]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.cpgnonn1 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:44 compute-1 sudo[34128]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:40:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:44.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:40:45 compute-1 sudo[34280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geeswhvzjbnkxyriuxnmvnlrntjwdwpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002844.8293588-144-96328041436470/AnsiballZ_stat.py'
Oct 09 09:40:45 compute-1 sudo[34280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:45 compute-1 ceph-mon[9795]: pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:45 compute-1 python3.9[34282]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:45 compute-1 sudo[34280]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:45 compute-1 sudo[34358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdrwoawsgcrfbbyzxgjgfabytiefzzwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002844.8293588-144-96328041436470/AnsiballZ_file.py'
Oct 09 09:40:45 compute-1 sudo[34358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:45 compute-1 python3.9[34361]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.oimnt4t2 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:45 compute-1 sudo[34358]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:45.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:46 compute-1 sudo[34511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjdpnoxdltimtqjeswjjrgbwybiqssug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002845.8513556-183-229687920916707/AnsiballZ_file.py'
Oct 09 09:40:46 compute-1 sudo[34511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:46 compute-1 python3.9[34513]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:46 compute-1 sudo[34511]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:46 compute-1 sudo[34663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qycenvsaruecgclpuigsefvdkgqbycdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002846.3455153-207-155889472636847/AnsiballZ_stat.py'
Oct 09 09:40:46 compute-1 sudo[34663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:46 compute-1 python3.9[34665]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:46 compute-1 sudo[34663]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:46.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:46 compute-1 sudo[34741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyedwekueyvhyfgftivvvgukdwmdoebp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002846.3455153-207-155889472636847/AnsiballZ_file.py'
Oct 09 09:40:46 compute-1 sudo[34741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:47 compute-1 python3.9[34743]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:47 compute-1 sudo[34741]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:47 compute-1 sudo[34893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjpzlnkiwdgjpjnjnwfnpwiyglnmcubz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002847.1164188-207-126972878006508/AnsiballZ_stat.py'
Oct 09 09:40:47 compute-1 sudo[34893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:47 compute-1 ceph-mon[9795]: pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:47 compute-1 python3.9[34895]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:47 compute-1 sudo[34893]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:47.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:47 compute-1 sudo[34972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bllvqptfhbhuzeeupcufcaittjytajqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002847.1164188-207-126972878006508/AnsiballZ_file.py'
Oct 09 09:40:47 compute-1 sudo[34972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:47 compute-1 python3.9[34974]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:47 compute-1 sudo[34972]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:48 compute-1 sudo[35124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diejhhfhqdujkbuylkdtkatsmofnsqne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002847.9518197-276-124018329507584/AnsiballZ_file.py'
Oct 09 09:40:48 compute-1 sudo[35124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:48 compute-1 python3.9[35126]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:48 compute-1 sudo[35124]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:48 compute-1 sudo[35204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:40:48 compute-1 sudo[35204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:48 compute-1 sudo[35204]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:48.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:48 compute-1 sudo[35301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqkugwbhnzrakmwivbrfvgdukcyaxalg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002848.5725935-300-215667088214891/AnsiballZ_stat.py'
Oct 09 09:40:48 compute-1 sudo[35301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:48 compute-1 python3.9[35303]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:48 compute-1 sudo[35301]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:49 compute-1 sudo[35379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgrwusgvbryacxvronzqkurmhjbtsjsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002848.5725935-300-215667088214891/AnsiballZ_file.py'
Oct 09 09:40:49 compute-1 sudo[35379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:49 compute-1 python3.9[35381]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:49 compute-1 sudo[35379]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:49 compute-1 ceph-mon[9795]: pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:49.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:49 compute-1 sudo[35532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmecerdhxttwhnlkmbdigowsmgijohyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002849.460123-336-259041292529372/AnsiballZ_stat.py'
Oct 09 09:40:49 compute-1 sudo[35532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:49 compute-1 python3.9[35534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:49 compute-1 sudo[35532]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:49 compute-1 sudo[35610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbgarehcaazgtridrzesqojjpdhkoniq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002849.460123-336-259041292529372/AnsiballZ_file.py'
Oct 09 09:40:49 compute-1 sudo[35610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:50 compute-1 python3.9[35612]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:50 compute-1 sudo[35610]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:40:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:50.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:50 compute-1 sudo[35762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnirsuoggqwjvnogsgfggoowjuehotjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002850.304846-372-74961168984874/AnsiballZ_systemd.py'
Oct 09 09:40:50 compute-1 sudo[35762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:51 compute-1 python3.9[35764]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:40:51 compute-1 systemd[1]: Reloading.
Oct 09 09:40:51 compute-1 systemd-rc-local-generator[35786]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:40:51 compute-1 systemd-sysv-generator[35789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:40:51 compute-1 sudo[35762]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:51 compute-1 ceph-mon[9795]: pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:51.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:51 compute-1 sudo[35953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggilbvmixuuiibntgdobcqtgcbitslnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002851.387692-396-61083334793511/AnsiballZ_stat.py'
Oct 09 09:40:51 compute-1 sudo[35953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:51 compute-1 python3.9[35955]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:51 compute-1 sudo[35953]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:51 compute-1 sudo[36031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hymoqplsyczwqooxoxxqzzhuqorajseq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002851.387692-396-61083334793511/AnsiballZ_file.py'
Oct 09 09:40:51 compute-1 sudo[36031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:52 compute-1 python3.9[36033]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:52 compute-1 sudo[36031]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:52 compute-1 sudo[36183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owfxvfinwgohjlbivpxbunqllpoxaodl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002852.2486439-432-114799625334015/AnsiballZ_stat.py'
Oct 09 09:40:52 compute-1 sudo[36183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:52 compute-1 python3.9[36185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:52 compute-1 sudo[36183]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:52.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:52 compute-1 sudo[36261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nukqsbxybhgnxkgsshvmmcyjpqvwqvgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002852.2486439-432-114799625334015/AnsiballZ_file.py'
Oct 09 09:40:52 compute-1 sudo[36261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:52 compute-1 python3.9[36263]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:52 compute-1 sudo[36261]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:53 compute-1 sudo[36413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrctcpcgxbcdelzrwjtfpkaemupckqbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002853.0983596-468-84216993059138/AnsiballZ_systemd.py'
Oct 09 09:40:53 compute-1 sudo[36413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:53 compute-1 ceph-mon[9795]: pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:40:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:53.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:53 compute-1 python3.9[36415]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:40:53 compute-1 systemd[1]: Reloading.
Oct 09 09:40:53 compute-1 systemd-rc-local-generator[36436]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:40:53 compute-1 systemd-sysv-generator[36439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:40:53 compute-1 systemd[1]: Starting Create netns directory...
Oct 09 09:40:53 compute-1 systemd[11486]: Created slice User Background Tasks Slice.
Oct 09 09:40:53 compute-1 systemd[11486]: Starting Cleanup of User's Temporary Files and Directories...
Oct 09 09:40:53 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:40:53 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:40:53 compute-1 systemd[1]: Finished Create netns directory.
Oct 09 09:40:53 compute-1 systemd[11486]: Finished Cleanup of User's Temporary Files and Directories.
Oct 09 09:40:53 compute-1 sudo[36413]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:54 compute-1 python3.9[36608]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:40:54 compute-1 network[36625]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:40:54 compute-1 network[36626]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:40:54 compute-1 network[36627]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:40:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:54.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:55 compute-1 ceph-mon[9795]: pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:40:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:55.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:40:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:56 compute-1 sudo[36730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:40:56 compute-1 sudo[36730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:56 compute-1 sudo[36730]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:56 compute-1 sudo[36759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:40:56 compute-1 sudo[36759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:56 compute-1 sudo[36759]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:56.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:57 compute-1 sudo[36970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtgcdvqeasdshjouyrvfdcdzrjvgjaav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002857.129336-546-174914165483452/AnsiballZ_stat.py'
Oct 09 09:40:57 compute-1 sudo[36970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:57 compute-1 ceph-mon[9795]: pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:57 compute-1 python3.9[36973]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:57 compute-1 sudo[36970]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:57.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:57 compute-1 sudo[37049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouyrhkaujrtdjbjxlkntjpzdailvilfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002857.129336-546-174914165483452/AnsiballZ_file.py'
Oct 09 09:40:57 compute-1 sudo[37049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:57 compute-1 python3.9[37051]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:57 compute-1 sudo[37049]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:58 compute-1 sudo[37201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piagqxuqlxqaflxusehraxixdwgoqbdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002858.1330154-585-69101075414880/AnsiballZ_file.py'
Oct 09 09:40:58 compute-1 sudo[37201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:58 compute-1 python3.9[37203]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:58 compute-1 sudo[37201]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:58 compute-1 sudo[37353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqnlecroczxunntjjgjngknequhtgyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002858.663575-609-240604854823930/AnsiballZ_stat.py'
Oct 09 09:40:58 compute-1 sudo[37353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:59 compute-1 python3.9[37355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:59 compute-1 sudo[37353]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:59 compute-1 sudo[37431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzfrmmdcgacduchohujzbogyoffazoti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002858.663575-609-240604854823930/AnsiballZ_file.py'
Oct 09 09:40:59 compute-1 sudo[37431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:40:59 compute-1 ceph-mon[9795]: pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:40:59 compute-1 python3.9[37433]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:59 compute-1 sudo[37431]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:40:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:40:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:59.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:41:00 compute-1 sudo[37584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fevphuckwjsrdzimqgkylggvejybfnom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002859.7288098-654-117073731800129/AnsiballZ_timezone.py'
Oct 09 09:41:00 compute-1 sudo[37584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:00 compute-1 python3.9[37586]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 09 09:41:00 compute-1 systemd[1]: Starting Time & Date Service...
Oct 09 09:41:00 compute-1 systemd[1]: Started Time & Date Service.
Oct 09 09:41:00 compute-1 sudo[37584]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:00.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:00 compute-1 sudo[37740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhkpzxhdvjzyyeavhkbiekpehybvjimd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002860.7025971-681-39004691704077/AnsiballZ_file.py'
Oct 09 09:41:00 compute-1 sudo[37740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:01 compute-1 python3.9[37742]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:01 compute-1 sudo[37740]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:01 compute-1 ceph-mon[9795]: pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:01 compute-1 sudo[37893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkwefdgaedkzuquwviglhqglohxbvdtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002861.217225-705-186863743425936/AnsiballZ_stat.py'
Oct 09 09:41:01 compute-1 sudo[37893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:01 compute-1 python3.9[37895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:41:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:01.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:41:01 compute-1 sudo[37893]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:01 compute-1 sudo[37971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-optibftvdiswexoimixlvcbwgifaiqjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002861.217225-705-186863743425936/AnsiballZ_file.py'
Oct 09 09:41:01 compute-1 sudo[37971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:01 compute-1 python3.9[37973]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:01 compute-1 sudo[37971]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:01 compute-1 sudo[37974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:41:01 compute-1 sudo[37974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:41:01 compute-1 sudo[37974]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:02 compute-1 sudo[38148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lanyzuqyyazthkbsjrffgsnmfudehlqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002862.1155205-741-2828569937449/AnsiballZ_stat.py'
Oct 09 09:41:02 compute-1 sudo[38148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:02 compute-1 python3.9[38150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:02 compute-1 sudo[38148]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:02 compute-1 sudo[38226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkkeohlvnjwrmibhdaboqdlihjbeiqgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002862.1155205-741-2828569937449/AnsiballZ_file.py'
Oct 09 09:41:02 compute-1 sudo[38226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:02.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:02 compute-1 python3.9[38228]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.w9_u1sar recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:02 compute-1 sudo[38226]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:41:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:41:03 compute-1 sudo[38378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpjspytqcxoznwrubzxzjvdkqcniabed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002862.954196-777-103343711938129/AnsiballZ_stat.py'
Oct 09 09:41:03 compute-1 sudo[38378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:03 compute-1 python3.9[38380]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:03 compute-1 sudo[38378]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:03 compute-1 sudo[38457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxyyqunrdmmbdthnpqknpryyjwuuntdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002862.954196-777-103343711938129/AnsiballZ_file.py'
Oct 09 09:41:03 compute-1 sudo[38457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:41:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:03.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:41:03 compute-1 python3.9[38459]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:03 compute-1 sudo[38457]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:03 compute-1 ceph-mon[9795]: pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:41:04 compute-1 sudo[38609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhnvsepdocckuyicbsllrvecsdozgcxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002863.8474243-816-64578134426113/AnsiballZ_command.py'
Oct 09 09:41:04 compute-1 sudo[38609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:04 compute-1 python3.9[38611]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:04 compute-1 sudo[38609]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:04.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:04 compute-1 sudo[38762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjzdmrdxcpyjitvbparpcpulwdldgipn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760002864.4706132-840-261421635256090/AnsiballZ_edpm_nftables_from_files.py'
Oct 09 09:41:04 compute-1 sudo[38762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:41:04 compute-1 python3[38764]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 09 09:41:04 compute-1 sudo[38762]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:05 compute-1 sudo[38914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfxotvzorojjwflpdlkqotrtehzdznsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002865.1373615-864-242966192782165/AnsiballZ_stat.py'
Oct 09 09:41:05 compute-1 sudo[38914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:05 compute-1 python3.9[38917]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:05 compute-1 sudo[38914]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:05.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:05 compute-1 sudo[38993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdwsrtmvqanlluuravxkimyjkuzhhtmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002865.1373615-864-242966192782165/AnsiballZ_file.py'
Oct 09 09:41:05 compute-1 sudo[38993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:05 compute-1 python3.9[38995]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:05 compute-1 sudo[38993]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:05 compute-1 ceph-mon[9795]: pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:06 compute-1 sudo[39145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pveiomrduxiruhdnrfdwjrljkzcrfrwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002866.0199475-900-183977310485973/AnsiballZ_stat.py'
Oct 09 09:41:06 compute-1 sudo[39145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:06 compute-1 python3.9[39147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:06 compute-1 sudo[39145]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:06 compute-1 sudo[39223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcgwkrpgzkfcyubkqrqcphimxsqtcdqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002866.0199475-900-183977310485973/AnsiballZ_file.py'
Oct 09 09:41:06 compute-1 sudo[39223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:06 compute-1 python3.9[39225]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:06.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:06 compute-1 sudo[39223]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:07 compute-1 sudo[39375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udztxnrykibhnbyrvhrqknpjihqrckgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002866.913672-936-54334732777956/AnsiballZ_stat.py'
Oct 09 09:41:07 compute-1 sudo[39375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:07 compute-1 python3.9[39377]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:07 compute-1 sudo[39375]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:07 compute-1 sudo[39454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvumcoarwoxhlhitkyihiyzexmvlznae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002866.913672-936-54334732777956/AnsiballZ_file.py'
Oct 09 09:41:07 compute-1 sudo[39454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:07.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:07 compute-1 python3.9[39456]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:07 compute-1 sudo[39454]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:07 compute-1 ceph-mon[9795]: pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:07 compute-1 sudo[39606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtrfmjineyflrjonetouumkmbwwfebms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002867.7665558-972-272035900028978/AnsiballZ_stat.py'
Oct 09 09:41:07 compute-1 sudo[39606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:08 compute-1 python3.9[39608]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:08 compute-1 sudo[39606]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:08 compute-1 sudo[39684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aixljijhdnanljuqrejqbuccgfynbmlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002867.7665558-972-272035900028978/AnsiballZ_file.py'
Oct 09 09:41:08 compute-1 sudo[39684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:08 compute-1 python3.9[39686]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:08 compute-1 sudo[39684]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:08 compute-1 sudo[39752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:41:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:08.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:08 compute-1 sudo[39752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:41:08 compute-1 sudo[39752]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:08 compute-1 sudo[39861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-audimaavgzdlyyehfcyajoagwfrjktmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002868.678589-1008-155173260442265/AnsiballZ_stat.py'
Oct 09 09:41:08 compute-1 sudo[39861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:09 compute-1 python3.9[39863]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:09 compute-1 sudo[39861]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:09 compute-1 sudo[39939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejwlfaapqmbghbasehpdrfzfgsnebrgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002868.678589-1008-155173260442265/AnsiballZ_file.py'
Oct 09 09:41:09 compute-1 sudo[39939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:09 compute-1 python3.9[39941]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:09 compute-1 sudo[39939]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:09.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:09 compute-1 sudo[40092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aprycippyvyiimmsmprmebybmniigwfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002869.6803908-1047-113090915648837/AnsiballZ_command.py'
Oct 09 09:41:09 compute-1 sudo[40092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:09 compute-1 ceph-mon[9795]: pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:10 compute-1 python3.9[40094]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:10 compute-1 sudo[40092]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:10 compute-1 sudo[40247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aywicctkusojongnefnmyflohranplzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002870.2042265-1071-105718694018567/AnsiballZ_blockinfile.py'
Oct 09 09:41:10 compute-1 sudo[40247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:10 compute-1 python3.9[40249]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:10 compute-1 sudo[40247]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:10.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:11 compute-1 sudo[40399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcurvrjfunacirgnxkzfdaocrwvyaumk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002870.8668945-1098-30073367658983/AnsiballZ_file.py'
Oct 09 09:41:11 compute-1 sudo[40399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:11 compute-1 python3.9[40401]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:11 compute-1 sudo[40399]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:11 compute-1 sudo[40552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suczhyidpatwjzbqjtxfyiokadejfcve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002871.3198874-1098-23503890889872/AnsiballZ_file.py'
Oct 09 09:41:11 compute-1 sudo[40552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:11.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:11 compute-1 python3.9[40554]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:11 compute-1 sudo[40552]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:11 compute-1 ceph-mon[9795]: pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:12 compute-1 sudo[40704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcalrbsentvdcieqbqjlfzgmqmdwzdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002871.8281376-1143-134594689253251/AnsiballZ_mount.py'
Oct 09 09:41:12 compute-1 sudo[40704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:12 compute-1 python3.9[40706]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 09 09:41:12 compute-1 sudo[40704]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:12 compute-1 sudo[40856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvkhannbtldtlzahipefnkmzwqxkofvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002872.3861675-1143-226189100787299/AnsiballZ_mount.py'
Oct 09 09:41:12 compute-1 sudo[40856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:12 compute-1 python3.9[40858]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 09 09:41:12 compute-1 sudo[40856]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:12.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:13 compute-1 sshd-session[33570]: Connection closed by 192.168.122.30 port 41058
Oct 09 09:41:13 compute-1 sshd-session[33567]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:41:13 compute-1 systemd-logind[798]: Session 27 logged out. Waiting for processes to exit.
Oct 09 09:41:13 compute-1 systemd[1]: session-27.scope: Deactivated successfully.
Oct 09 09:41:13 compute-1 systemd[1]: session-27.scope: Consumed 20.614s CPU time.
Oct 09 09:41:13 compute-1 systemd-logind[798]: Removed session 27.
Oct 09 09:41:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:41:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:41:13 compute-1 ceph-mon[9795]: pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:41:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:14.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:15 compute-1 ceph-mon[9795]: pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:16.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:17.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:17 compute-1 sshd-session[40886]: Accepted publickey for zuul from 192.168.122.30 port 52190 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:41:17 compute-1 systemd-logind[798]: New session 28 of user zuul.
Oct 09 09:41:17 compute-1 systemd[1]: Started Session 28 of User zuul.
Oct 09 09:41:17 compute-1 sshd-session[40886]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:41:17 compute-1 ceph-mon[9795]: pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:18 compute-1 sudo[41039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bblbvacimbleheqszbmmzntcwkrdfckn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002877.9029503-19-217910309505187/AnsiballZ_tempfile.py'
Oct 09 09:41:18 compute-1 sudo[41039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:18 compute-1 python3.9[41041]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 09 09:41:18 compute-1 sudo[41039]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:18.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:18 compute-1 sudo[41191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxorcjofxggduolcugoweilnwcfeetjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002878.5414195-55-156152249956641/AnsiballZ_stat.py'
Oct 09 09:41:18 compute-1 sudo[41191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:19 compute-1 python3.9[41193]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:41:19 compute-1 sudo[41191]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:19 compute-1 sudo[41346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odiwzpvvfsjxmdnjjyptvqwsjfgwgxtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002879.1695356-79-147812689084453/AnsiballZ_slurp.py'
Oct 09 09:41:19 compute-1 sudo[41346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:41:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:19.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:41:19 compute-1 python3.9[41348]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 09 09:41:19 compute-1 sudo[41346]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:19 compute-1 ceph-mon[9795]: pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:41:19 compute-1 sudo[41498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhqrpaurojvwenfqkdjircnaelqxfvku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002879.758989-103-7766229410158/AnsiballZ_stat.py'
Oct 09 09:41:19 compute-1 sudo[41498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:20 compute-1 python3.9[41500]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.x64l6g1a follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:20 compute-1 sudo[41498]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:20 compute-1 sudo[41623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoqhtkmtebiicmpwczdrkbavdreafrro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002879.758989-103-7766229410158/AnsiballZ_copy.py'
Oct 09 09:41:20 compute-1 sudo[41623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:20 compute-1 python3.9[41625]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.x64l6g1a mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002879.758989-103-7766229410158/.source.x64l6g1a _original_basename=.b_7yq3z5 follow=False checksum=231ee42d81be70362d898b48675a8dc8dc6887b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:20 compute-1 sudo[41623]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:20.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:21 compute-1 sudo[41775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oziehfltrsrglwjwbzeojuljvlqqihrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002880.8342333-148-243637298827554/AnsiballZ_setup.py'
Oct 09 09:41:21 compute-1 sudo[41775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:21 compute-1 python3.9[41777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:41:21 compute-1 sudo[41775]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:21.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:21 compute-1 ceph-mon[9795]: pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:22 compute-1 sudo[41928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fknjhjtxiqsiedtgtcrewgajsxnzjzmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002881.7288737-173-12273028645245/AnsiballZ_blockinfile.py'
Oct 09 09:41:22 compute-1 sudo[41928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:22 compute-1 python3.9[41930]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEdAe+aHzafP9dhAtdIAtOm2sC12803SCpA/3rl1ydGqAiReivZh0j/TO2wBzoqsan7nzM7eG4TWSpqK+0ZBgBjrUjB9Cj1eCLSLOLFpIUpLcs70zpiXFEg4VCxifit+r7hVmAjbLpb7lUOEBeuKAC+NijlzOD2XrC+yd3AhBkIuX/kEOqNS457QburXRcER973lXO7bXpB0owCrgGAzOsy1i7FT6Zz4mSB7l2Iy2drh0BXBPs+laJ9chzaIYm3t6/xdGegDzZd9R0R/aKxaO2CGff8by/bJ8Ga/DZNziOBiuIImaU3kBJc76SWraZeoiOMwDTosKuZfFadJWywRHIP1xUSkKdLGnB0MzpGtOhcIWX642g/WIM4+Y078U5nwtvOcNHpA/uT9uRc7nBCEzPpJVHtyVbh0kQ9x86pCj83Ph6ZZ1RPGolhJ6oztdGyl5QMj/rkG45+H83p9c18d5vzsZzrcKaYtBEg3BJ80PfCqFw5Al9hHq/55Yd0D5PiK8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN+sxaZ1V99vc+E5ar8KEv4Hqy68kJM/buHn1/XxovLr
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDc5CVbyus+PfQGnwFQkfkACIJgIJPRc/fJ1ooz9D/2T/S79sUKftWyZ1JOurJ8lQdLc+LgRGezTzhfuY3R3F6E=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCow+01n6Hl7e4y/xRpTIYbwm1BUam3jmz5ScpeEvosFn7TfszdHV/Do5gTioKon9F6x7Kn2fhkWobIt7rTveNaK0lE2p35tJDQJQ5zYJD3N4aWHdvfaigYEXYaH3OOpmqEhRw/IyxGzW1MS8OfGUNyziUYt99LLYhcEkDneuZnPOI2444OzzU0pYxCtaVSevz9aDR2yi9BWKNIP8iMTNqu9UpE9IaOANEDrZu7gbGMBTDiR1lYzo1peJrtAa/cpTF9DoFnddTbpOMLjd6HaRrnifcc9fP1YtxWn8T1ldTjecUUCp2yo6ycdOUdBiJG9yWw1gI7SXYjeHJbX/1QS6HWd5DWxJFbSf0zP5d5BWyDf5+TFu1/gImUA0HT8WOYb4tm1QH1NAThcRLvtUFg32CcbqOnUyAxW0wDeGoLCW7EERN9OKr11fwlYjdyW/TbqYWRn0J2WhZa4OoZ/C4m9ug6PP7SEo9wXLqN9t4eArVkbeTemzPigVRqNrD2eywEU4k=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCkglmiqZQwqqMItgWA6O04td1K/U4vAgm36NE9rj3U
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLD7v/1C4ThvDcQi8c4DTsjkszkaGHBX0ZNWy5MwKVH3Qt7bVSlXkD8SB3/nhOUlBIzdAK/JQpzVyqfy+61YZMk=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKE7qnQSdbsdsOaGWRokEAHfuZHqF4BkfkIlbsIxi6+FzXfmziMPrsg1PoVUBFOzaP55y6aRtUEaXoCsB+KxPGXhHnh3IdEYTUa5EvJs6/mUlEqIwltt8CLNKUrDV6N38V1v5gaRPIAI5iTwtbap14q+0iDF8MVi8MPKlkqoL/+Z49sJ4HqR31EZpD4cWKso/dkKZQSuVQg+TgJ3bnUKIRYPDS7fjVuZpr0KMyU+v4wjBKXvles8lctvRXdfpY2/33XtBG2af+p/+5mg47b5ylWC3wISLO590WzC4X2T0Pv1a6I9O/Dt3V8xyTfzbqi4ia9/kwNBJg1GGqNBssdedHK3AZDOTSd9U+/C1R9oBDXZ7nSo3hIzMQvrm5DXkthix56gd3x9MrMMzc+wTlFtlm2XwpMg7PtdxMZK++rIfPVxzKXBBQsdDd0W3cbam616N/XERaDJKIUqnPe5sE1qhpaFt8aNtwg+buZpYK5ubLbuJZpASgSC6dIuDsEIk6Af8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtxusJG2g5S2RnWLxtcDjdiTuv+VWibld9MVjIgPUzn
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG1pQwHgci56FauRELJKl6O8ntBVH1APLVaVNPCodlG/V+A+h79tYrSqi3QKycc18niRc7Eiq8wWQ8VbX+OhkmY=
                                             create=True mode=0644 path=/tmp/ansible.x64l6g1a state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:22 compute-1 sudo[41928]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:22 compute-1 sudo[42080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbiqjrqpcxtpfnhwvrfuktdimwmpklai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002882.3819408-197-128121353178448/AnsiballZ_command.py'
Oct 09 09:41:22 compute-1 sudo[42080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:22.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:22 compute-1 python3.9[42082]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.x64l6g1a' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:22 compute-1 sudo[42080]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:23 compute-1 sudo[42234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oowqdeennnrekpvjlxaumrtumvagxdne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002883.0118608-221-240846636833356/AnsiballZ_file.py'
Oct 09 09:41:23 compute-1 sudo[42234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:23 compute-1 python3.9[42236]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.x64l6g1a state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:23 compute-1 sudo[42234]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:23.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:23 compute-1 sshd-session[40889]: Connection closed by 192.168.122.30 port 52190
Oct 09 09:41:23 compute-1 sshd-session[40886]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:41:23 compute-1 systemd[1]: session-28.scope: Deactivated successfully.
Oct 09 09:41:23 compute-1 systemd[1]: session-28.scope: Consumed 3.502s CPU time.
Oct 09 09:41:23 compute-1 systemd-logind[798]: Session 28 logged out. Waiting for processes to exit.
Oct 09 09:41:23 compute-1 systemd-logind[798]: Removed session 28.
Oct 09 09:41:23 compute-1 ceph-mon[9795]: pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:41:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:25.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:25 compute-1 ceph-mon[9795]: pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:26.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:27.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:27 compute-1 ceph-mon[9795]: pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:28 compute-1 sudo[42264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:41:28 compute-1 sudo[42264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:41:28 compute-1 sudo[42264]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:29 compute-1 sshd-session[42289]: Accepted publickey for zuul from 192.168.122.30 port 49364 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:41:29 compute-1 systemd-logind[798]: New session 29 of user zuul.
Oct 09 09:41:29 compute-1 systemd[1]: Started Session 29 of User zuul.
Oct 09 09:41:29 compute-1 sshd-session[42289]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:41:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:29.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:29 compute-1 python3.9[42443]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:41:29 compute-1 ceph-mon[9795]: pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:30 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 09 09:41:30 compute-1 sudo[42599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkadeonzoexgemhdjedrhzfuaernapez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002890.1184638-57-64605871517106/AnsiballZ_systemd.py'
Oct 09 09:41:30 compute-1 sudo[42599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:30.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:30 compute-1 python3.9[42601]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 09 09:41:30 compute-1 sudo[42599]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:31 compute-1 sudo[42753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkibbdhpheylsjelrforguidppcwdjbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002891.0090466-81-93445834866193/AnsiballZ_systemd.py'
Oct 09 09:41:31 compute-1 sudo[42753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:31 compute-1 python3.9[42755]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:41:31 compute-1 sudo[42753]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:31.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:31 compute-1 ceph-mon[9795]: pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:31 compute-1 sudo[42907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxqneqmehgcailiinbbstfcvecxbxsnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002891.6971142-108-253564093044104/AnsiballZ_command.py'
Oct 09 09:41:31 compute-1 sudo[42907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:32 compute-1 python3.9[42909]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:32 compute-1 sudo[42907]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:32 compute-1 sudo[43060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iytqqyrmsfjdfprlvwjauatghsdmhghk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002892.3018126-132-52222715771249/AnsiballZ_stat.py'
Oct 09 09:41:32 compute-1 sudo[43060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:32 compute-1 python3.9[43062]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:41:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:32.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:32 compute-1 sudo[43060]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:33 compute-1 sudo[43212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iobecwsejeexzmmlwyrkktverrsdgope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002892.9994137-159-121139381211927/AnsiballZ_file.py'
Oct 09 09:41:33 compute-1 sudo[43212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:33 compute-1 python3.9[43214]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:33 compute-1 sudo[43212]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:33 compute-1 sshd-session[42292]: Connection closed by 192.168.122.30 port 49364
Oct 09 09:41:33 compute-1 sshd-session[42289]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:41:33 compute-1 systemd[1]: session-29.scope: Deactivated successfully.
Oct 09 09:41:33 compute-1 systemd[1]: session-29.scope: Consumed 2.716s CPU time.
Oct 09 09:41:33 compute-1 systemd-logind[798]: Session 29 logged out. Waiting for processes to exit.
Oct 09 09:41:33 compute-1 systemd-logind[798]: Removed session 29.
Oct 09 09:41:33 compute-1 ceph-mon[9795]: pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:41:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:34.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:41:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:35 compute-1 ceph-mon[9795]: pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094136 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:41:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:37.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:38 compute-1 ceph-mon[9795]: pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:38.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:38 compute-1 sshd-session[43242]: Accepted publickey for zuul from 192.168.122.30 port 56992 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:41:38 compute-1 systemd-logind[798]: New session 30 of user zuul.
Oct 09 09:41:38 compute-1 systemd[1]: Started Session 30 of User zuul.
Oct 09 09:41:38 compute-1 sshd-session[43242]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:41:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:39 compute-1 python3.9[43396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:41:40 compute-1 ceph-mon[9795]: pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:40 compute-1 sudo[43550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpqydiqsxlvzmhqjkjxxzfndchetyizc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002900.110276-63-60841543144792/AnsiballZ_setup.py'
Oct 09 09:41:40 compute-1 sudo[43550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:40 compute-1 python3.9[43552]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:41:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:40 compute-1 sudo[43550]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:40.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:41 compute-1 sudo[43634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvjuqeshxfzfdnqczbfspgtzgqahkrgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002900.110276-63-60841543144792/AnsiballZ_dnf.py'
Oct 09 09:41:41 compute-1 sudo[43634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:41 compute-1 python3.9[43636]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 09 09:41:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:42 compute-1 ceph-mon[9795]: pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:42 compute-1 sudo[43634]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:42 compute-1 python3.9[43788]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:41:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:42.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:41:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:44 compute-1 ceph-mon[9795]: pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 09 09:41:44 compute-1 python3.9[43940]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:41:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:41:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:44.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:41:45 compute-1 python3.9[44090]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:41:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:45.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:45 compute-1 python3.9[44241]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:41:46 compute-1 ceph-mon[9795]: pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 09 09:41:46 compute-1 sshd-session[43245]: Connection closed by 192.168.122.30 port 56992
Oct 09 09:41:46 compute-1 sshd-session[43242]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:41:46 compute-1 systemd[1]: session-30.scope: Deactivated successfully.
Oct 09 09:41:46 compute-1 systemd[1]: session-30.scope: Consumed 4.180s CPU time.
Oct 09 09:41:46 compute-1 systemd-logind[798]: Session 30 logged out. Waiting for processes to exit.
Oct 09 09:41:46 compute-1 systemd-logind[798]: Removed session 30.
Oct 09 09:41:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:46.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:48 compute-1 ceph-mon[9795]: pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:41:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:48.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:48 compute-1 sudo[44267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:41:48 compute-1 sudo[44267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:41:48 compute-1 sudo[44267]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:49.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:50 compute-1 ceph-mon[9795]: pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:41:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:41:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:50.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:51 compute-1 sshd-session[44293]: Accepted publickey for zuul from 192.168.122.30 port 56312 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:41:51 compute-1 systemd-logind[798]: New session 31 of user zuul.
Oct 09 09:41:51 compute-1 systemd[1]: Started Session 31 of User zuul.
Oct 09 09:41:51 compute-1 sshd-session[44293]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:41:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:51 compute-1 python3.9[44447]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:41:52 compute-1 ceph-mon[9795]: pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:41:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:52.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:53 compute-1 sudo[44601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tavbtdeqjuaorrvxgzijpokgxvyxqpzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002912.8202016-110-122091793327208/AnsiballZ_file.py'
Oct 09 09:41:53 compute-1 sudo[44601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:53 compute-1 python3.9[44603]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:41:53 compute-1 sudo[44601]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:53 compute-1 sudo[44754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqjxpbrihtenofaipohosvpirzqasben ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002913.4107802-110-156751606433084/AnsiballZ_file.py'
Oct 09 09:41:53 compute-1 sudo[44754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:53 compute-1 python3.9[44756]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:41:53 compute-1 sudo[44754]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:54 compute-1 ceph-mon[9795]: pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:41:54 compute-1 sudo[44906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sprjiaobrjrlmpciwoefudkalfoxikeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002913.8946865-155-78738292739409/AnsiballZ_stat.py'
Oct 09 09:41:54 compute-1 sudo[44906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:54 compute-1 python3.9[44908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:54 compute-1 sudo[44906]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:54 compute-1 sudo[45029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvzazpakjwayrwonlmhzceggxjxcvbzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002913.8946865-155-78738292739409/AnsiballZ_copy.py'
Oct 09 09:41:54 compute-1 sudo[45029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:54 compute-1 python3.9[45031]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002913.8946865-155-78738292739409/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=31b5b94d01ae58766b61e67f4ae5ae5ba2535471 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:54 compute-1 sudo[45029]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:55 compute-1 sudo[45181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bscvbhydeuqoahdmokvhxxjbjokclepu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002915.025151-155-150111778946896/AnsiballZ_stat.py'
Oct 09 09:41:55 compute-1 sudo[45181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:55 compute-1 python3.9[45183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:55 compute-1 sudo[45181]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:55 compute-1 sudo[45305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdqgdnqkymelzcdagintysmgtdgmccds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002915.025151-155-150111778946896/AnsiballZ_copy.py'
Oct 09 09:41:55 compute-1 sudo[45305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:55 compute-1 python3.9[45307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002915.025151-155-150111778946896/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=7fbde074fa214bc5bd2f230fec0e2b862212f741 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:55 compute-1 sudo[45305]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:56 compute-1 sudo[45457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxemmiezloaxheuvhlhvypoxhiwuawci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002915.8521652-155-52260362495654/AnsiballZ_stat.py'
Oct 09 09:41:56 compute-1 sudo[45457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:56 compute-1 ceph-mon[9795]: pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:41:56 compute-1 python3.9[45459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:56 compute-1 sudo[45457]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:56 compute-1 sudo[45580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nslqvhidzlrgoqemzdreckiaejboqema ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002915.8521652-155-52260362495654/AnsiballZ_copy.py'
Oct 09 09:41:56 compute-1 sudo[45580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:56 compute-1 python3.9[45582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002915.8521652-155-52260362495654/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=fb704c0fb95908366e9ed9140b8909cf655bf6db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:56 compute-1 sudo[45580]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:41:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:41:56 compute-1 sudo[45732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaisrbglbhbdachhruxpalflfmnanhat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002916.756048-287-23456820686301/AnsiballZ_file.py'
Oct 09 09:41:56 compute-1 sudo[45732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:57 compute-1 python3.9[45734]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:41:57 compute-1 sudo[45732]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:57 compute-1 sudo[45885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cizczbvxxhhnfwzajimzkzxpoldpfhzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002917.2089007-287-116039565232761/AnsiballZ_file.py'
Oct 09 09:41:57 compute-1 sudo[45885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:57 compute-1 python3.9[45887]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:41:57 compute-1 sudo[45885]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:57.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:57 compute-1 sudo[46037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwtjzldpqjedgyaafrbpkeftntublkcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002917.6733348-334-163427769271089/AnsiballZ_stat.py'
Oct 09 09:41:57 compute-1 sudo[46037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:57 compute-1 python3.9[46039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:58 compute-1 sudo[46037]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:58 compute-1 ceph-mon[9795]: pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:41:58 compute-1 sudo[46160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adxbarprryddqxgqrppoxkikefxusyhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002917.6733348-334-163427769271089/AnsiballZ_copy.py'
Oct 09 09:41:58 compute-1 sudo[46160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:58 compute-1 python3.9[46162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002917.6733348-334-163427769271089/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=f365cde10b4ba3f96d84c57378143e4d603806bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:58 compute-1 sudo[46160]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:58 compute-1 sudo[46312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxiwbxiuzbxiwcnyhxdgihubcsrevysw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002918.5032628-334-120523356016562/AnsiballZ_stat.py'
Oct 09 09:41:58 compute-1 sudo[46312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:58.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:58 compute-1 python3.9[46314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:58 compute-1 sudo[46312]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:59 compute-1 sudo[46435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apspqtkbwszrvogtshyjerzijkqsrgej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002918.5032628-334-120523356016562/AnsiballZ_copy.py'
Oct 09 09:41:59 compute-1 sudo[46435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:59 compute-1 python3.9[46437]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002918.5032628-334-120523356016562/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=40a9a855a5eba48419e934a92216fa818ce139fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:59 compute-1 sudo[46435]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:59 compute-1 sudo[46588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpkgyyswbwzbpbhnlgzsoshsakfcvgiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002919.3126118-334-176926975617038/AnsiballZ_stat.py'
Oct 09 09:41:59 compute-1 sudo[46588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:41:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:59.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:59 compute-1 python3.9[46590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:59 compute-1 sudo[46588]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:59 compute-1 sudo[46711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxvjojtnkiwnpghsuzeaqxxoliaisxmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002919.3126118-334-176926975617038/AnsiballZ_copy.py'
Oct 09 09:41:59 compute-1 sudo[46711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:00 compute-1 python3.9[46713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002919.3126118-334-176926975617038/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=12c7ec31274cfe83e058e95358eb8d7740905632 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:00 compute-1 sudo[46711]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:00 compute-1 ceph-mon[9795]: pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 09 09:42:00 compute-1 sudo[46863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evggvfczpoopqodzefirgjjnboqvrqsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002920.2224066-467-35279326193246/AnsiballZ_file.py'
Oct 09 09:42:00 compute-1 sudo[46863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:00 compute-1 python3.9[46865]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:00 compute-1 sudo[46863]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:00.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:00 compute-1 sudo[47015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yraaumfjabxzbrfzhpmgffjhspehdesp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002920.6691663-467-125861600585948/AnsiballZ_file.py'
Oct 09 09:42:00 compute-1 sudo[47015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:01 compute-1 python3.9[47017]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:01 compute-1 sudo[47015]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:01 compute-1 sudo[47167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grrjdddlqhlairvygtjgvnlwiygrltrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002921.143547-511-94901261412329/AnsiballZ_stat.py'
Oct 09 09:42:01 compute-1 sudo[47167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:01 compute-1 python3.9[47169]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:01 compute-1 sudo[47167]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:01 compute-1 sudo[47291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxradqfdwxmokzmxzrkfslyhadacybnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002921.143547-511-94901261412329/AnsiballZ_copy.py'
Oct 09 09:42:01 compute-1 sudo[47291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:01 compute-1 python3.9[47293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002921.143547-511-94901261412329/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=1158a4e160417c2d76a4d5879579d5453669b3a7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:01 compute-1 sudo[47291]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-1 sudo[47393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:42:02 compute-1 sudo[47393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:02 compute-1 sudo[47393]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-1 ceph-mon[9795]: pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 09 09:42:02 compute-1 sudo[47442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:42:02 compute-1 sudo[47442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:02 compute-1 sudo[47492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbbffgithklbisqecyorpzxbpgsxmqzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002921.9512398-511-23394880291312/AnsiballZ_stat.py'
Oct 09 09:42:02 compute-1 sudo[47492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:02 compute-1 python3.9[47495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:02 compute-1 sudo[47492]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-1 sudo[47442]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-1 sudo[47645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oytdngvtsezlricsomyjmzfhafiftrcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002921.9512398-511-23394880291312/AnsiballZ_copy.py'
Oct 09 09:42:02 compute-1 sudo[47645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:02 compute-1 python3.9[47647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002921.9512398-511-23394880291312/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=40a9a855a5eba48419e934a92216fa818ce139fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:02 compute-1 sudo[47645]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:02.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:02 compute-1 sudo[47797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-patoqgsqlbmbgssxbvnuesselwglqxnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002922.7967257-511-156324773257120/AnsiballZ_stat.py'
Oct 09 09:42:02 compute-1 sudo[47797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:42:03 compute-1 python3.9[47799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:03 compute-1 sudo[47797]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:03 compute-1 sudo[47921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lumbagiiwhxvmedgonscarxwaaijtwfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002922.7967257-511-156324773257120/AnsiballZ_copy.py'
Oct 09 09:42:03 compute-1 sudo[47921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:03 compute-1 python3.9[47923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002922.7967257-511-156324773257120/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=2f6cee9bc263aba4c5c7fdb1bdebce05af2d6b8b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:03 compute-1 sudo[47921]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:03.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:04 compute-1 ceph-mon[9795]: pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 1 op/s
Oct 09 09:42:04 compute-1 sudo[48073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqklguafkfiuzekvdbchhrquvirilvvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002924.0975506-685-34605653593951/AnsiballZ_file.py'
Oct 09 09:42:04 compute-1 sudo[48073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:04 compute-1 python3.9[48075]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:04 compute-1 sudo[48073]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:04 compute-1 sudo[48225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytsxlmoegzwbpksrbsttnnyghfgcnbyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002924.5537596-710-247727102063491/AnsiballZ_stat.py'
Oct 09 09:42:04 compute-1 sudo[48225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:42:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:42:04 compute-1 python3.9[48227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:04 compute-1 sudo[48225]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:42:05 compute-1 sudo[48348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-larsmapdvqaqsuyeaqdneidseefvdqtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002924.5537596-710-247727102063491/AnsiballZ_copy.py'
Oct 09 09:42:05 compute-1 sudo[48348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:05 compute-1 python3.9[48350]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002924.5537596-710-247727102063491/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:05 compute-1 sudo[48348]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:05 compute-1 sudo[48501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ravwcioacbcykoaxwamxkkpldtenryui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002925.4542599-756-279200505942198/AnsiballZ_file.py'
Oct 09 09:42:05 compute-1 sudo[48501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:05.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:05 compute-1 python3.9[48503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:05 compute-1 sudo[48504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:42:05 compute-1 sudo[48504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:05 compute-1 sudo[48504]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:05 compute-1 sudo[48501]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:06 compute-1 sudo[48678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufolduqzyzfybcixfmwhuwkhshvgwzqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002925.9103842-779-137273690439715/AnsiballZ_stat.py'
Oct 09 09:42:06 compute-1 sudo[48678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:06 compute-1 ceph-mon[9795]: pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:42:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:42:06 compute-1 python3.9[48680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:06 compute-1 sudo[48678]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:06 compute-1 sudo[48801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adodsxdxqhxjqgwmqqfralsfmcwysypt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002925.9103842-779-137273690439715/AnsiballZ_copy.py'
Oct 09 09:42:06 compute-1 sudo[48801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:06 compute-1 python3.9[48803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002925.9103842-779-137273690439715/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:06 compute-1 sudo[48801]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:06.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:06 compute-1 sudo[48953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nksbbnsluwdaekbxhizjdqbwomikqdyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002926.7813327-824-221889456361485/AnsiballZ_file.py'
Oct 09 09:42:06 compute-1 sudo[48953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:07 compute-1 python3.9[48955]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:07 compute-1 sudo[48953]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:07 compute-1 sudo[49106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyugwbeufzqgiiabjemguxxollniyqkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002927.2468543-847-38712297183620/AnsiballZ_stat.py'
Oct 09 09:42:07 compute-1 sudo[49106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:07 compute-1 python3.9[49108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:07 compute-1 sudo[49106]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:07.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:07 compute-1 sudo[49229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmgdsczdyrxnblxlebodlmuldiytwblh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002927.2468543-847-38712297183620/AnsiballZ_copy.py'
Oct 09 09:42:07 compute-1 sudo[49229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:07 compute-1 python3.9[49231]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002927.2468543-847-38712297183620/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:07 compute-1 sudo[49229]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:08 compute-1 ceph-mon[9795]: pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 09 09:42:08 compute-1 sudo[49381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uadwdqjpqxwucqrkspjcnfkylfwlyaby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002928.1310444-894-3728430541373/AnsiballZ_file.py'
Oct 09 09:42:08 compute-1 sudo[49381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:08 compute-1 python3.9[49383]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:08 compute-1 sudo[49381]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:08 compute-1 sudo[49533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvcfxixdoovwbfveysmebubwuxobmoyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002928.6040115-918-84245676733233/AnsiballZ_stat.py'
Oct 09 09:42:08 compute-1 sudo[49533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:42:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:42:08 compute-1 sudo[49536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:42:08 compute-1 sudo[49536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:08 compute-1 sudo[49536]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:08 compute-1 python3.9[49535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:08 compute-1 sudo[49533]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:09 compute-1 sudo[49681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulqvvpbxxvoizuxnfnxkmpwykyaushmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002928.6040115-918-84245676733233/AnsiballZ_copy.py'
Oct 09 09:42:09 compute-1 sudo[49681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:09 compute-1 python3.9[49683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002928.6040115-918-84245676733233/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:09 compute-1 sudo[49681]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:42:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:09.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:42:09 compute-1 sudo[49834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhedvxihftostlfdifaloqwgevubwmuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002929.4976091-966-84083584766063/AnsiballZ_file.py'
Oct 09 09:42:09 compute-1 sudo[49834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:09 compute-1 python3.9[49836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:09 compute-1 sudo[49834]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:10 compute-1 ceph-mon[9795]: pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:10 compute-1 sudo[49986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfxzrapxrsgfpjdvzuyzodhuxxlxlope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002929.9691763-990-172796485269759/AnsiballZ_stat.py'
Oct 09 09:42:10 compute-1 sudo[49986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:10 compute-1 python3.9[49988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:10 compute-1 sudo[49986]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:10 compute-1 sudo[50109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znzzxgmkrerjncjxtobfpvmnxvjfbtqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002929.9691763-990-172796485269759/AnsiballZ_copy.py'
Oct 09 09:42:10 compute-1 sudo[50109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:10 compute-1 python3.9[50111]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002929.9691763-990-172796485269759/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:10 compute-1 sudo[50109]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.003000032s ======
Oct 09 09:42:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:10.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000032s
Oct 09 09:42:11 compute-1 sudo[50261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biwclcsdmkgpmwzctmkthoolixwcivgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002930.900959-1036-45616431262116/AnsiballZ_file.py'
Oct 09 09:42:11 compute-1 sudo[50261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:11 compute-1 python3.9[50263]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:11 compute-1 sudo[50261]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:11 compute-1 sudo[50414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmcbwzhqpwnadtugseigkgvsuezruung ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002931.363108-1061-200004042767018/AnsiballZ_stat.py'
Oct 09 09:42:11 compute-1 sudo[50414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:11.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:11 compute-1 python3.9[50416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:11 compute-1 sudo[50414]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:11 compute-1 sudo[50537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqnxdhwrkmffirelhbaesphrbnudtqfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002931.363108-1061-200004042767018/AnsiballZ_copy.py'
Oct 09 09:42:11 compute-1 sudo[50537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:12 compute-1 python3.9[50539]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002931.363108-1061-200004042767018/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:12 compute-1 sudo[50537]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:12 compute-1 ceph-mon[9795]: pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:12.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:13 compute-1 ceph-mon[9795]: pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:42:13 compute-1 sshd-session[44296]: Connection closed by 192.168.122.30 port 56312
Oct 09 09:42:13 compute-1 sshd-session[44293]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:42:13 compute-1 systemd[1]: session-31.scope: Deactivated successfully.
Oct 09 09:42:13 compute-1 systemd[1]: session-31.scope: Consumed 15.979s CPU time.
Oct 09 09:42:13 compute-1 systemd-logind[798]: Session 31 logged out. Waiting for processes to exit.
Oct 09 09:42:13 compute-1 systemd-logind[798]: Removed session 31.
Oct 09 09:42:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:42:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:13.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:42:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:14.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:15 compute-1 ceph-mon[9795]: pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:15.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:16.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:17 compute-1 ceph-mon[9795]: pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 09 09:42:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:17.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:18 compute-1 sshd-session[50567]: Accepted publickey for zuul from 192.168.122.30 port 42316 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:42:18 compute-1 systemd-logind[798]: New session 32 of user zuul.
Oct 09 09:42:18 compute-1 systemd[1]: Started Session 32 of User zuul.
Oct 09 09:42:18 compute-1 sshd-session[50567]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:42:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:42:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:42:18 compute-1 sudo[50720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdpjareewlfdlhtvrxmxhbmzmztdxsrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002938.5897288-27-124591292851749/AnsiballZ_file.py'
Oct 09 09:42:18 compute-1 sudo[50720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:19 compute-1 python3.9[50722]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:19 compute-1 sudo[50720]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:19 compute-1 ceph-mon[9795]: pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:19 compute-1 sudo[50873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueudyabuveiuculdqpviirzrzmpzaekn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002939.2689123-63-46834724787835/AnsiballZ_stat.py'
Oct 09 09:42:19 compute-1 sudo[50873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:19 compute-1 python3.9[50875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:19 compute-1 sudo[50873]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:20 compute-1 sudo[50996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hapyoxlntyyawpfqdrxbtsxvsxyizhfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002939.2689123-63-46834724787835/AnsiballZ_copy.py'
Oct 09 09:42:20 compute-1 sudo[50996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:20 compute-1 python3.9[50998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002939.2689123-63-46834724787835/.source.conf _original_basename=ceph.conf follow=False checksum=8b7272e0630e6cb598e773121c6b56dda1c87bf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:20 compute-1 sudo[50996]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:42:20 compute-1 sudo[51148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfypztuboyvndhhorqmyovzsquekkdus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002940.3621464-63-97115789464381/AnsiballZ_stat.py'
Oct 09 09:42:20 compute-1 sudo[51148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:20 compute-1 python3.9[51150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:20 compute-1 sudo[51148]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:20.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:20 compute-1 sudo[51271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlpmtehvhzjyetyqsisgdqncoycfseta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002940.3621464-63-97115789464381/AnsiballZ_copy.py'
Oct 09 09:42:20 compute-1 sudo[51271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:21 compute-1 python3.9[51273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002940.3621464-63-97115789464381/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f2b8c5d3158b549e18e5631f97d7800b8ceae49e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:21 compute-1 sudo[51271]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:21 compute-1 ceph-mon[9795]: pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:21 compute-1 sshd-session[50570]: Connection closed by 192.168.122.30 port 42316
Oct 09 09:42:21 compute-1 sshd-session[50567]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:42:21 compute-1 systemd-logind[798]: Session 32 logged out. Waiting for processes to exit.
Oct 09 09:42:21 compute-1 systemd[1]: session-32.scope: Deactivated successfully.
Oct 09 09:42:21 compute-1 systemd[1]: session-32.scope: Consumed 1.863s CPU time.
Oct 09 09:42:21 compute-1 systemd-logind[798]: Removed session 32.
Oct 09 09:42:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:21.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:23 compute-1 ceph-mon[9795]: pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:42:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:23.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:25 compute-1 ceph-mon[9795]: pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:25.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:26 compute-1 sshd-session[51301]: Accepted publickey for zuul from 192.168.122.30 port 53684 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:42:26 compute-1 systemd-logind[798]: New session 33 of user zuul.
Oct 09 09:42:26 compute-1 systemd[1]: Started Session 33 of User zuul.
Oct 09 09:42:26 compute-1 sshd-session[51301]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:42:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:27 compute-1 python3.9[51454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:42:27 compute-1 ceph-mon[9795]: pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 09 09:42:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:27.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:28 compute-1 sudo[51609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hifapiimyktnsrkkjgekwijguoyaeiyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002947.6977468-63-41997918034546/AnsiballZ_file.py'
Oct 09 09:42:28 compute-1 sudo[51609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:28 compute-1 python3.9[51611]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:28 compute-1 sudo[51609]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:28 compute-1 sudo[51761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlbnnhkmvxhalfyrrvdppkpymperycxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002948.3328161-63-44498932176081/AnsiballZ_file.py'
Oct 09 09:42:28 compute-1 sudo[51761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:28 compute-1 python3.9[51763]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:28 compute-1 sudo[51761]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:28 compute-1 sudo[51811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:42:28 compute-1 sudo[51811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:28 compute-1 sudo[51811]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094229 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:42:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [ALERT] 281/094229 (4) : backend 'backend' has no server available!
Oct 09 09:42:29 compute-1 python3.9[51938]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:42:29 compute-1 ceph-mon[9795]: pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:29.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:29 compute-1 sudo[52089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngvexfrtimeaicsjpmhriyomtwtbpqwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002949.488227-132-63157043525373/AnsiballZ_seboolean.py'
Oct 09 09:42:29 compute-1 sudo[52089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:29 compute-1 python3.9[52091]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 09 09:42:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:30.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:30 compute-1 sudo[52089]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:31 compute-1 sudo[52251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omwbwlyrlamtpnxylzzbguwqwijisdyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002951.1728747-162-71296113697680/AnsiballZ_setup.py'
Oct 09 09:42:31 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=2 res=1
Oct 09 09:42:31 compute-1 sudo[52251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:31 compute-1 ceph-mon[9795]: pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:31 compute-1 python3.9[52253]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:42:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:42:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:31.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:42:31 compute-1 sudo[52251]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:32 compute-1 sudo[52335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjzwoykcuwvwxzgyxmvbkyriucpodjbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002951.1728747-162-71296113697680/AnsiballZ_dnf.py'
Oct 09 09:42:32 compute-1 sudo[52335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:32 compute-1 python3.9[52337]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:42:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:33 compute-1 sudo[52335]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:33 compute-1 ceph-mon[9795]: pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 09 09:42:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:33 compute-1 sudo[52489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjpygledyodvddpldldmbmgyrtpdmabl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002953.364332-198-212599407860862/AnsiballZ_systemd.py'
Oct 09 09:42:33 compute-1 sudo[52489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:34 compute-1 python3.9[52491]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:42:34 compute-1 sudo[52489]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:34 compute-1 sudo[52644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmcbdytmqjpbsnurotniadcbuphfalqg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760002954.2433686-222-247180738982989/AnsiballZ_edpm_nftables_snippet.py'
Oct 09 09:42:34 compute-1 sudo[52644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:34 compute-1 python3[52646]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 09 09:42:34 compute-1 sudo[52644]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:34.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:35 compute-1 sudo[52796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzwysglsoobnzzcdnzrnopmzafyjgenw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002954.9231703-249-198693269692298/AnsiballZ_file.py'
Oct 09 09:42:35 compute-1 sudo[52796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:35 compute-1 python3.9[52798]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:35 compute-1 sudo[52796]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:35 compute-1 ceph-mon[9795]: pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 09 09:42:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:42:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:35.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:35 compute-1 sudo[52949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsdozuytjlzyxablcgigksbtlcwhfwsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002955.4099102-273-3153471323255/AnsiballZ_stat.py'
Oct 09 09:42:35 compute-1 sudo[52949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:35 compute-1 python3.9[52951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:35 compute-1 sudo[52949]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:36 compute-1 sudo[53027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytyunjxxmkovuvqurgojizrhkozzdlfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002955.4099102-273-3153471323255/AnsiballZ_file.py'
Oct 09 09:42:36 compute-1 sudo[53027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:36 compute-1 python3.9[53029]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:36 compute-1 sudo[53027]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:36 compute-1 sudo[53179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toljzefxmfvbfrzzswcxjkhqoybreape ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002956.3698347-309-257960859151147/AnsiballZ_stat.py'
Oct 09 09:42:36 compute-1 sudo[53179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:36 compute-1 python3.9[53181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:36 compute-1 sudo[53179]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:36.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:36 compute-1 sudo[53257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewwukfhdkcetjbhfhvawgjrpshjhnvwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002956.3698347-309-257960859151147/AnsiballZ_file.py'
Oct 09 09:42:36 compute-1 sudo[53257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:37 compute-1 python3.9[53259]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.l_3qc9j0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:37 compute-1 sudo[53257]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:37 compute-1 sudo[53410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uthehklsuvgbfcvyigdnekxemolxubia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002957.2112036-345-45752726040010/AnsiballZ_stat.py'
Oct 09 09:42:37 compute-1 sudo[53410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:37 compute-1 ceph-mon[9795]: pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:37 compute-1 python3.9[53412]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:37 compute-1 sudo[53410]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:37.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:37 compute-1 sudo[53488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwyneekpuwpugqcmtabjlljkqnqqokru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002957.2112036-345-45752726040010/AnsiballZ_file.py'
Oct 09 09:42:37 compute-1 sudo[53488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:37 compute-1 python3.9[53490]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:37 compute-1 sudo[53488]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:38 compute-1 sudo[53640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kahvbktxohjfbxlfxditxjrzqcwqiwks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002958.1380208-384-236930372717373/AnsiballZ_command.py'
Oct 09 09:42:38 compute-1 sudo[53640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:38 compute-1 python3.9[53642]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:38 compute-1 sudo[53640]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:38.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:39 compute-1 sudo[53793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbdshrevwzfcjtbzbkmwuobewafjcbmp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760002958.7708552-408-22835755984743/AnsiballZ_edpm_nftables_from_files.py'
Oct 09 09:42:39 compute-1 sudo[53793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:39 compute-1 python3[53795]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 09 09:42:39 compute-1 sudo[53793]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:39 compute-1 ceph-mon[9795]: pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:39 compute-1 sudo[53946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdncwhrfssmrgwbdkzorxvsoehouwvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002959.3872094-432-31671337278761/AnsiballZ_stat.py'
Oct 09 09:42:39 compute-1 sudo[53946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:39.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:39 compute-1 python3.9[53948]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:39 compute-1 sudo[53946]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:40 compute-1 sudo[54071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yivogpbmkjmimyhisufxnvdcvotkcywd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002959.3872094-432-31671337278761/AnsiballZ_copy.py'
Oct 09 09:42:40 compute-1 sudo[54071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:40 compute-1 python3.9[54073]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002959.3872094-432-31671337278761/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:40 compute-1 sudo[54071]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:40 compute-1 sudo[54223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isnkreemleirswvohngrjtkahkgbkbwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002960.4959867-477-25198386682782/AnsiballZ_stat.py'
Oct 09 09:42:40 compute-1 sudo[54223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:40 compute-1 python3.9[54225]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:40 compute-1 sudo[54223]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:41 compute-1 sudo[54348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sknkevqfyxspdulmsjhkvwobktzivavl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002960.4959867-477-25198386682782/AnsiballZ_copy.py'
Oct 09 09:42:41 compute-1 sudo[54348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:41 compute-1 python3.9[54350]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002960.4959867-477-25198386682782/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:41 compute-1 sudo[54348]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:41 compute-1 ceph-mon[9795]: pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:41.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:41 compute-1 sudo[54501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzvmzpcevvmyhhawklgnnqbairxkhysi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002961.6199763-522-248912768142839/AnsiballZ_stat.py'
Oct 09 09:42:41 compute-1 sudo[54501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:41 compute-1 python3.9[54503]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:41 compute-1 sudo[54501]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:42 compute-1 sudo[54626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bciiyoctewlhkwlezlguijchbgmlwwcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002961.6199763-522-248912768142839/AnsiballZ_copy.py'
Oct 09 09:42:42 compute-1 sudo[54626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:42 compute-1 python3.9[54628]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002961.6199763-522-248912768142839/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:42 compute-1 sudo[54626]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:42 compute-1 sudo[54778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvrkpzsmgvskdnsagxvfkarvfqwvxcrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002962.575008-567-153886351382436/AnsiballZ_stat.py'
Oct 09 09:42:42 compute-1 sudo[54778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:42 compute-1 python3.9[54780]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:42 compute-1 sudo[54778]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:43 compute-1 sudo[54903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omuietfjprrvolcsucaphblbmcczcmud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002962.575008-567-153886351382436/AnsiballZ_copy.py'
Oct 09 09:42:43 compute-1 sudo[54903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:43 compute-1 python3.9[54905]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002962.575008-567-153886351382436/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:43 compute-1 sudo[54903]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:43 compute-1 ceph-mon[9795]: pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct 09 09:42:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:43.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:43 compute-1 sudo[55056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rynstjcjzbhsnbcdkymmmvlpmchqzuvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002963.5462565-612-180697205211331/AnsiballZ_stat.py'
Oct 09 09:42:43 compute-1 sudo[55056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:43 compute-1 python3.9[55058]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:43 compute-1 sudo[55056]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:44 compute-1 sudo[55181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnggvrbgigthqkjfxqhgqdfawbfvgdjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002963.5462565-612-180697205211331/AnsiballZ_copy.py'
Oct 09 09:42:44 compute-1 sudo[55181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:44 compute-1 python3.9[55183]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002963.5462565-612-180697205211331/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:44 compute-1 sudo[55181]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:44 compute-1 sudo[55333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljdjkeskmcbiwapvucjjiciiookpgqwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002964.764201-657-29066882436931/AnsiballZ_file.py'
Oct 09 09:42:44 compute-1 sudo[55333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:45 compute-1 python3.9[55335]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:45 compute-1 sudo[55333]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:45 compute-1 sudo[55486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swqceexqzpoyfixydcfjaeqzkljzotqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002965.272635-681-267744130622667/AnsiballZ_command.py'
Oct 09 09:42:45 compute-1 sudo[55486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:45 compute-1 ceph-mon[9795]: pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 170 B/s wr, 1 op/s
Oct 09 09:42:45 compute-1 python3.9[55488]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:45 compute-1 sudo[55486]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:45.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:46 compute-1 sudo[55641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fazovdrpaybrwjzbxeybjghjysbprdqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002965.8319883-705-137285453992994/AnsiballZ_blockinfile.py'
Oct 09 09:42:46 compute-1 sudo[55641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:46 compute-1 python3.9[55643]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:46 compute-1 sudo[55641]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:46 compute-1 sudo[55793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyxummlsfdonytiteqibmyqtcombpwda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002966.5815566-732-138437619057132/AnsiballZ_command.py'
Oct 09 09:42:46 compute-1 sudo[55793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:46 compute-1 python3.9[55795]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:46 compute-1 sudo[55793]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:47 compute-1 sudo[55947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrpboxkglwqppzmlxbfnfttkilkqvugy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002967.1280687-756-52657697476021/AnsiballZ_stat.py'
Oct 09 09:42:47 compute-1 sudo[55947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:47 compute-1 python3.9[55949]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:42:47 compute-1 ceph-mon[9795]: pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s
Oct 09 09:42:47 compute-1 sudo[55947]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:47.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:47 compute-1 sudo[56101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsonibyzeislxtjytqoxqjbfyudfnpea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002967.7300925-780-91021608037901/AnsiballZ_command.py'
Oct 09 09:42:47 compute-1 sudo[56101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:48 compute-1 python3.9[56103]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:48 compute-1 sudo[56101]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:48 compute-1 sudo[56256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoealwwdslhgjutzkqwlbimwjfmhdiwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002968.2692025-804-165169678687282/AnsiballZ_file.py'
Oct 09 09:42:48 compute-1 sudo[56256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:48 compute-1 python3.9[56258]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:48 compute-1 sudo[56256]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:48 compute-1 sudo[56283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:42:49 compute-1 sudo[56283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:49 compute-1 sudo[56283]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:49 compute-1 ceph-mon[9795]: pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:42:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:49.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:49 compute-1 python3.9[56434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:42:50 compute-1 sudo[56585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqanlpacujqzqugjgiashyrpuouekvrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002970.3983817-924-205683354631716/AnsiballZ_command.py'
Oct 09 09:42:50 compute-1 sudo[56585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:50 compute-1 python3.9[56587]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:50 compute-1 ovs-vsctl[56588]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 09 09:42:50 compute-1 sudo[56585]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:51 compute-1 sudo[56738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnvjirtqdicsjnjxieeraqyolcorytkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002971.0001307-951-182045744708649/AnsiballZ_command.py'
Oct 09 09:42:51 compute-1 sudo[56738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:51 compute-1 python3.9[56740]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:51 compute-1 sudo[56738]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:51 compute-1 ceph-mon[9795]: pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:51.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:51 compute-1 sudo[56894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoaowrvunwcdbtqlkscecgmnccqbtrct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002971.5419436-975-96103874362073/AnsiballZ_command.py'
Oct 09 09:42:51 compute-1 sudo[56894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:51 compute-1 python3.9[56896]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:51 compute-1 ovs-vsctl[56897]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 09 09:42:51 compute-1 sudo[56894]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:52 compute-1 python3.9[57047]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:42:52 compute-1 sudo[57199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvwrboankttnikmtlizgpfuiwrfpyfxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002972.6170127-1026-276624282830408/AnsiballZ_file.py'
Oct 09 09:42:52 compute-1 sudo[57199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:42:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:42:52 compute-1 python3.9[57201]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:52 compute-1 sudo[57199]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:53 compute-1 sudo[57351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gryodyikckuhaweagkyogsvcdppswunl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002973.1411548-1050-137654201208017/AnsiballZ_stat.py'
Oct 09 09:42:53 compute-1 sudo[57351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:53 compute-1 python3.9[57353]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:53 compute-1 sudo[57351]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:53 compute-1 ceph-mon[9795]: pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:53.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:53 compute-1 sudo[57430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladkmqfxolnftfhzeiulexmpomyiqwwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002973.1411548-1050-137654201208017/AnsiballZ_file.py'
Oct 09 09:42:53 compute-1 sudo[57430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:53 compute-1 python3.9[57432]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:54 compute-1 sudo[57430]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:54 compute-1 sudo[57582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blgqunjzilipcjymkvnwvtlydafgnmfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002974.0962803-1050-263159774351280/AnsiballZ_stat.py'
Oct 09 09:42:54 compute-1 sudo[57582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:54 compute-1 python3.9[57584]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:54 compute-1 sudo[57582]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:54 compute-1 sudo[57660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwmaenzucyosginddemscoseajamvkak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002974.0962803-1050-263159774351280/AnsiballZ_file.py'
Oct 09 09:42:54 compute-1 sudo[57660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:54 compute-1 python3.9[57662]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:54 compute-1 sudo[57660]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:55 compute-1 sudo[57812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrgkwuymkvbabvwqnwvmqcksuobwtwnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002974.9444108-1119-122085546144096/AnsiballZ_file.py'
Oct 09 09:42:55 compute-1 sudo[57812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:55 compute-1 python3.9[57814]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:55 compute-1 sudo[57812]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:55 compute-1 ceph-mon[9795]: pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:42:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:55 compute-1 sudo[57965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svhfaaoorahtibcbafmfqyvekrtagczo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002975.48084-1143-8724404960517/AnsiballZ_stat.py'
Oct 09 09:42:55 compute-1 sudo[57965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:55.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:55 compute-1 python3.9[57967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:55 compute-1 sudo[57965]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:56 compute-1 sudo[58043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwqokrvasgnnyhxzaanpecvwfsjfihin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002975.48084-1143-8724404960517/AnsiballZ_file.py'
Oct 09 09:42:56 compute-1 sudo[58043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:56 compute-1 python3.9[58045]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:56 compute-1 sudo[58043]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:56 compute-1 sudo[58195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuootawpbqjsjalfswvhviklpddilhhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002976.379966-1179-23536399160914/AnsiballZ_stat.py'
Oct 09 09:42:56 compute-1 sudo[58195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:56 compute-1 python3.9[58197]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:56 compute-1 sudo[58195]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:56.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:56 compute-1 sudo[58273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sayfvqybolghwomggxxnwmlglfcfiuki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002976.379966-1179-23536399160914/AnsiballZ_file.py'
Oct 09 09:42:56 compute-1 sudo[58273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:57 compute-1 python3.9[58275]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:57 compute-1 sudo[58273]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:57 compute-1 sudo[58426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijhkbyxmhnoedwurivmslqtrswvcrhyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002977.2246513-1215-54193845403597/AnsiballZ_systemd.py'
Oct 09 09:42:57 compute-1 sudo[58426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:57 compute-1 ceph-mon[9795]: pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:42:57 compute-1 python3.9[58428]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:42:57 compute-1 systemd[1]: Reloading.
Oct 09 09:42:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:57.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:57 compute-1 systemd-rc-local-generator[58450]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:42:57 compute-1 systemd-sysv-generator[58453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:42:57 compute-1 sudo[58426]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:58 compute-1 sudo[58616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gljsccpnkpbgskqvljwpphsqomvrpgkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002978.0989256-1239-44209000420809/AnsiballZ_stat.py'
Oct 09 09:42:58 compute-1 sudo[58616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:58 compute-1 python3.9[58618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:58 compute-1 sudo[58616]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:58 compute-1 sudo[58694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgbyeuchqkpkkqcexmmxubavsulbfndh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002978.0989256-1239-44209000420809/AnsiballZ_file.py'
Oct 09 09:42:58 compute-1 sudo[58694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:58 compute-1 python3.9[58696]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:58 compute-1 sudo[58694]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:42:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:58.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:42:59 compute-1 sudo[58846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzrxvwclijwabryullsubvxlorhcvqgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002978.9151778-1275-188891485367888/AnsiballZ_stat.py'
Oct 09 09:42:59 compute-1 sudo[58846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:59 compute-1 python3.9[58848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:59 compute-1 sudo[58846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:59 compute-1 sudo[58925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-litmujyrrpfmzjibwysjyptpsojicbnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002978.9151778-1275-188891485367888/AnsiballZ_file.py'
Oct 09 09:42:59 compute-1 sudo[58925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:59 compute-1 python3.9[58927]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:59 compute-1 sudo[58925]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:59 compute-1 ceph-mon[9795]: pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:42:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:42:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:59.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:59 compute-1 sudo[59077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmsonuyuuqqwxjarojpbtlumxcmyioef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002979.7374742-1311-77439927531165/AnsiballZ_systemd.py'
Oct 09 09:42:59 compute-1 sudo[59077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:00 compute-1 python3.9[59079]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:43:00 compute-1 systemd[1]: Reloading.
Oct 09 09:43:00 compute-1 systemd-rc-local-generator[59106]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:00 compute-1 systemd-sysv-generator[59109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:00 compute-1 systemd[1]: Starting Create netns directory...
Oct 09 09:43:00 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:43:00 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:43:00 compute-1 systemd[1]: Finished Create netns directory.
Oct 09 09:43:00 compute-1 sudo[59077]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:43:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:00.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:43:00 compute-1 sudo[59271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptajvouerqregmczmbfggrflyfpugego ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002980.7280033-1341-140221680008972/AnsiballZ_file.py'
Oct 09 09:43:00 compute-1 sudo[59271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:01 compute-1 python3.9[59273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:01 compute-1 sudo[59271]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:01 compute-1 sudo[59424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnporohcixuvkuboisblrngasknwdbzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002981.2302542-1365-270395360166098/AnsiballZ_stat.py'
Oct 09 09:43:01 compute-1 sudo[59424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:01 compute-1 python3.9[59426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:01 compute-1 sudo[59424]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:01 compute-1 ceph-mon[9795]: pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:01.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:01 compute-1 sudo[59547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zftkufjvytqyymrlgsfhefnllpdzhxuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002981.2302542-1365-270395360166098/AnsiballZ_copy.py'
Oct 09 09:43:01 compute-1 sudo[59547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:01 compute-1 python3.9[59549]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002981.2302542-1365-270395360166098/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:01 compute-1 sudo[59547]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:02 compute-1 sudo[59699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caivjalcrxyphleatxxdmcfrmghexvgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002982.5153167-1416-44416546768566/AnsiballZ_file.py'
Oct 09 09:43:02 compute-1 sudo[59699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:02 compute-1 python3.9[59701]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:02 compute-1 sudo[59699]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:03 compute-1 sudo[59851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kehfhakqmztahbiikbxjftgjdxidunxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002983.022078-1440-257267231829686/AnsiballZ_stat.py'
Oct 09 09:43:03 compute-1 sudo[59851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:03 compute-1 python3.9[59853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:03 compute-1 sudo[59851]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:03 compute-1 ceph-mon[9795]: pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:03 compute-1 sudo[59975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqjaxsrfnmxjinhjndzfdrbfymjlewfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002983.022078-1440-257267231829686/AnsiballZ_copy.py'
Oct 09 09:43:03 compute-1 sudo[59975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:03 compute-1 python3.9[59977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002983.022078-1440-257267231829686/.source.json _original_basename=.c4torahg follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:03 compute-1 sudo[59975]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:04 compute-1 sudo[60127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swudavobpbfuqtjylkzzsmfarsmbuubm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002983.9319897-1485-80052653524901/AnsiballZ_file.py'
Oct 09 09:43:04 compute-1 sudo[60127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:04 compute-1 python3.9[60129]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:04 compute-1 sudo[60127]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:43:04 compute-1 sudo[60279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzmrwixgmgkzmcjjfovzkzaaewbksjdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002984.4453068-1509-98122556344372/AnsiballZ_stat.py'
Oct 09 09:43:04 compute-1 sudo[60279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:04 compute-1 sudo[60279]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:04.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:05 compute-1 sudo[60402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldosjwfgqlndykjylvswzrhyohjlkrqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002984.4453068-1509-98122556344372/AnsiballZ_copy.py'
Oct 09 09:43:05 compute-1 sudo[60402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:05 compute-1 sudo[60402]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:05 compute-1 ceph-mon[9795]: pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:05.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:05 compute-1 sudo[60555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdotobcwkydssadtyalpnrljpssfbrkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002985.4508777-1560-150588550055389/AnsiballZ_container_config_data.py'
Oct 09 09:43:05 compute-1 sudo[60555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:05 compute-1 sudo[60558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:43:05 compute-1 sudo[60558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:05 compute-1 sudo[60558]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:05 compute-1 python3.9[60557]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 09 09:43:05 compute-1 sudo[60555]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:05 compute-1 sudo[60583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:43:05 compute-1 sudo[60583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:06 compute-1 sudo[60583]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:43:06 compute-1 sudo[60786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfydozolparsnndoeqicidptjnclbptz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002986.1425564-1587-55853219469187/AnsiballZ_container_config_hash.py'
Oct 09 09:43:06 compute-1 sudo[60786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:06 compute-1 python3.9[60788]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:43:06 compute-1 sudo[60786]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:43:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:06.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:43:07 compute-1 sudo[60938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blzksijhzyjivcpujlgaihuxvqcoyhxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002987.0231643-1614-211252060119721/AnsiballZ_podman_container_info.py'
Oct 09 09:43:07 compute-1 sudo[60938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:07 compute-1 python3.9[60941]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 09:43:07 compute-1 sudo[60938]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:07 compute-1 ceph-mon[9795]: pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:43:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:07.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:43:08 compute-1 sudo[61110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymxbwrbdqscokciuwxrtpmyivysdutae ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760002988.345724-1653-258667286083922/AnsiballZ_edpm_container_manage.py'
Oct 09 09:43:08 compute-1 sudo[61110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:43:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:43:08 compute-1 python3[61112]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:43:09 compute-1 sudo[61132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:43:09 compute-1 sudo[61132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:09 compute-1 sudo[61132]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:09 compute-1 sudo[61158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:43:09 compute-1 sudo[61158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:09 compute-1 sudo[61158]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:09 compute-1 ceph-mon[9795]: pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:43:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:43:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:09.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:10.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.123116) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991123159, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2304, "num_deletes": 250, "total_data_size": 6187562, "memory_usage": 6268800, "flush_reason": "Manual Compaction"}
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991129575, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2417462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10575, "largest_seqno": 12874, "table_properties": {"data_size": 2410776, "index_size": 3500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17020, "raw_average_key_size": 20, "raw_value_size": 2395934, "raw_average_value_size": 2852, "num_data_blocks": 156, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002785, "oldest_key_time": 1760002785, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 6495 microseconds, and 4038 cpu microseconds.
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.129617) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2417462 bytes OK
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.129629) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130202) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130222) EVENT_LOG_v1 {"time_micros": 1760002991130218, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130234) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6177364, prev total WAL file size 6177364, number of live WAL files 2.
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.131133) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2360KB)], [21(13MB)]
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991131156, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16869310, "oldest_snapshot_seqno": -1}
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4398 keys, 14824005 bytes, temperature: kUnknown
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991172429, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14824005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14789856, "index_size": 22071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 110473, "raw_average_key_size": 25, "raw_value_size": 14704904, "raw_average_value_size": 3343, "num_data_blocks": 954, "num_entries": 4398, "num_filter_entries": 4398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172579) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14824005 bytes
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172987) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 408.4 rd, 358.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 13.8 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(13.1) write-amplify(6.1) OK, records in: 4819, records dropped: 421 output_compression: NoCompression
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.173002) EVENT_LOG_v1 {"time_micros": 1760002991172995, "job": 10, "event": "compaction_finished", "compaction_time_micros": 41306, "compaction_time_cpu_micros": 20066, "output_level": 6, "num_output_files": 1, "total_output_size": 14824005, "num_input_records": 4819, "num_output_records": 4398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991173313, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991174872, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.131087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:11.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:12 compute-1 ceph-mon[9795]: pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:12.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:13 compute-1 ceph-mon[9795]: pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:13.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:13 compute-1 podman[61123]: 2025-10-09 09:43:13.934444118 +0000 UTC m=+4.951437246 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 09 09:43:14 compute-1 podman[61275]: 2025-10-09 09:43:14.031019155 +0000 UTC m=+0.033290766 container create 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Oct 09 09:43:14 compute-1 podman[61275]: 2025-10-09 09:43:14.013616365 +0000 UTC m=+0.015887986 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 09 09:43:14 compute-1 python3[61112]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 09 09:43:14 compute-1 sudo[61110]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.433923) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994434292, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 251, "total_data_size": 122966, "memory_usage": 129496, "flush_reason": "Manual Compaction"}
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994435267, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 80942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12879, "largest_seqno": 13164, "table_properties": {"data_size": 79032, "index_size": 138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4703, "raw_average_key_size": 17, "raw_value_size": 75309, "raw_average_value_size": 278, "num_data_blocks": 6, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002992, "oldest_key_time": 1760002992, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 1368 microseconds, and 526 cpu microseconds.
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435292) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 80942 bytes OK
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435301) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435832) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435843) EVENT_LOG_v1 {"time_micros": 1760002994435840, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435849) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 120817, prev total WAL file size 120817, number of live WAL files 2.
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.436274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(79KB)], [24(14MB)]
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994436298, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 14904947, "oldest_snapshot_seqno": -1}
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4158 keys, 11558194 bytes, temperature: kUnknown
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994466823, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11558194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11527322, "index_size": 19370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 106424, "raw_average_key_size": 25, "raw_value_size": 11448227, "raw_average_value_size": 2753, "num_data_blocks": 828, "num_entries": 4158, "num_filter_entries": 4158, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.467174) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11558194 bytes
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.467646) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 484.3 rd, 375.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.1 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(326.9) write-amplify(142.8) OK, records in: 4668, records dropped: 510 output_compression: NoCompression
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.467660) EVENT_LOG_v1 {"time_micros": 1760002994467654, "job": 12, "event": "compaction_finished", "compaction_time_micros": 30779, "compaction_time_cpu_micros": 17096, "output_level": 6, "num_output_files": 1, "total_output_size": 11558194, "num_input_records": 4668, "num_output_records": 4158, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994467915, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994469500, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.436232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-1 sudo[61453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsbmchkzutzqqjdbdvjyyeoqlgeaubmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002994.429451-1677-218057408778361/AnsiballZ_stat.py'
Oct 09 09:43:14 compute-1 sudo[61453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:14 compute-1 python3.9[61455]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:43:14 compute-1 sudo[61453]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:14.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:15 compute-1 sudo[61607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlngfbijmuxvfbpavlauqnkjrcbxumws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.0685432-1704-108333409736258/AnsiballZ_file.py'
Oct 09 09:43:15 compute-1 sudo[61607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:15 compute-1 python3.9[61609]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:15 compute-1 sudo[61607]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:15 compute-1 ceph-mon[9795]: pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:15 compute-1 sudo[61684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qacyoncitwnfysmcqrjyifovklkpolqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.0685432-1704-108333409736258/AnsiballZ_stat.py'
Oct 09 09:43:15 compute-1 sudo[61684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:15.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:15 compute-1 python3.9[61686]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:43:15 compute-1 sudo[61684]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:16 compute-1 sudo[61835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utqedharezslhiynawuooqgautbfqtpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.7959394-1704-238178989776320/AnsiballZ_copy.py'
Oct 09 09:43:16 compute-1 sudo[61835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:16 compute-1 python3.9[61837]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002995.7959394-1704-238178989776320/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:16 compute-1 sudo[61835]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:16 compute-1 sudo[61911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnmdtdkbymvzijyqwjbyiisbkdgmcjwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.7959394-1704-238178989776320/AnsiballZ_systemd.py'
Oct 09 09:43:16 compute-1 sudo[61911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:16 compute-1 python3.9[61913]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:43:16 compute-1 systemd[1]: Reloading.
Oct 09 09:43:16 compute-1 systemd-rc-local-generator[61936]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:16 compute-1 systemd-sysv-generator[61939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:16.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:16 compute-1 sudo[61911]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:17 compute-1 sudo[62022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agvkovtdourotoqdseugtmgthhwsynsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.7959394-1704-238178989776320/AnsiballZ_systemd.py'
Oct 09 09:43:17 compute-1 sudo[62022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:17 compute-1 python3.9[62024]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:43:17 compute-1 systemd[1]: Reloading.
Oct 09 09:43:17 compute-1 ceph-mon[9795]: pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:17 compute-1 systemd-rc-local-generator[62048]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:17 compute-1 systemd-sysv-generator[62051]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:17 compute-1 systemd[1]: Starting ovn_controller container...
Oct 09 09:43:17 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:43:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da26621943a776b8505fa56f3ae642147bf08deae6a1d60d99cb5dc80cb7ecac/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 09 09:43:17 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e.
Oct 09 09:43:17 compute-1 podman[62068]: 2025-10-09 09:43:17.712487021 +0000 UTC m=+0.073355957 container init 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 09 09:43:17 compute-1 ovn_controller[62080]: + sudo -E kolla_set_configs
Oct 09 09:43:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:17.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:17 compute-1 podman[62068]: 2025-10-09 09:43:17.734529896 +0000 UTC m=+0.095398833 container start 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 09:43:17 compute-1 edpm-start-podman-container[62068]: ovn_controller
Oct 09 09:43:17 compute-1 systemd[1]: Created slice User Slice of UID 0.
Oct 09 09:43:17 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 09 09:43:17 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 09 09:43:17 compute-1 systemd[1]: Starting User Manager for UID 0...
Oct 09 09:43:17 compute-1 systemd[62108]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 09 09:43:17 compute-1 edpm-start-podman-container[62067]: Creating additional drop-in dependency for "ovn_controller" (36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e)
Oct 09 09:43:17 compute-1 systemd[1]: Reloading.
Oct 09 09:43:17 compute-1 podman[62087]: 2025-10-09 09:43:17.828280672 +0000 UTC m=+0.085322312 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:43:17 compute-1 systemd[62108]: Queued start job for default target Main User Target.
Oct 09 09:43:17 compute-1 systemd[62108]: Created slice User Application Slice.
Oct 09 09:43:17 compute-1 systemd[62108]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 09 09:43:17 compute-1 systemd[62108]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:43:17 compute-1 systemd[62108]: Reached target Paths.
Oct 09 09:43:17 compute-1 systemd[62108]: Reached target Timers.
Oct 09 09:43:17 compute-1 systemd[62108]: Starting D-Bus User Message Bus Socket...
Oct 09 09:43:17 compute-1 systemd[62108]: Starting Create User's Volatile Files and Directories...
Oct 09 09:43:17 compute-1 systemd-rc-local-generator[62156]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:17 compute-1 systemd-sysv-generator[62159]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:17 compute-1 systemd[62108]: Finished Create User's Volatile Files and Directories.
Oct 09 09:43:17 compute-1 systemd[62108]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:43:17 compute-1 systemd[62108]: Reached target Sockets.
Oct 09 09:43:17 compute-1 systemd[62108]: Reached target Basic System.
Oct 09 09:43:17 compute-1 systemd[62108]: Reached target Main User Target.
Oct 09 09:43:17 compute-1 systemd[62108]: Startup finished in 110ms.
Oct 09 09:43:18 compute-1 systemd[1]: Started User Manager for UID 0.
Oct 09 09:43:18 compute-1 systemd[1]: Started ovn_controller container.
Oct 09 09:43:18 compute-1 systemd[1]: 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e-4fb4dfcd51813fe6.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 09:43:18 compute-1 systemd[1]: 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e-4fb4dfcd51813fe6.service: Failed with result 'exit-code'.
Oct 09 09:43:18 compute-1 sudo[62022]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:18 compute-1 systemd[1]: Started Session c1 of User root.
Oct 09 09:43:18 compute-1 ovn_controller[62080]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:43:18 compute-1 ovn_controller[62080]: INFO:__main__:Validating config file
Oct 09 09:43:18 compute-1 ovn_controller[62080]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:43:18 compute-1 ovn_controller[62080]: INFO:__main__:Writing out command to execute
Oct 09 09:43:18 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 09 09:43:18 compute-1 ovn_controller[62080]: ++ cat /run_command
Oct 09 09:43:18 compute-1 ovn_controller[62080]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 09 09:43:18 compute-1 ovn_controller[62080]: + ARGS=
Oct 09 09:43:18 compute-1 ovn_controller[62080]: + sudo kolla_copy_cacerts
Oct 09 09:43:18 compute-1 systemd[1]: Started Session c2 of User root.
Oct 09 09:43:18 compute-1 ovn_controller[62080]: + [[ ! -n '' ]]
Oct 09 09:43:18 compute-1 ovn_controller[62080]: + . kolla_extend_start
Oct 09 09:43:18 compute-1 ovn_controller[62080]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 09 09:43:18 compute-1 ovn_controller[62080]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 09 09:43:18 compute-1 ovn_controller[62080]: + umask 0022
Oct 09 09:43:18 compute-1 ovn_controller[62080]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 09 09:43:18 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1541] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1546] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1555] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1559] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1561] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 09 09:43:18 compute-1 kernel: br-int: entered promiscuous mode
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1680] manager: (ovn-c24bec-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1685] manager: (ovn-fc69d3-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1688] manager: (ovn-ef2171-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct 09 09:43:18 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1807] device (genev_sys_6081): carrier: link connected
Oct 09 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1809] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Oct 09 09:43:18 compute-1 systemd-udevd[62218]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:43:18 compute-1 systemd-udevd[62214]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:43:18 compute-1 sudo[62342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apvyqmwhuwijxftiiiteznwizrorynhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002998.2108915-1788-166163415322846/AnsiballZ_command.py'
Oct 09 09:43:18 compute-1 sudo[62342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:18 compute-1 python3.9[62344]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:43:18 compute-1 ovs-vsctl[62345]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 09 09:43:18 compute-1 sudo[62342]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:18 compute-1 sudo[62495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aouopnewbezwhrhouyayxiwrufmqhsoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002998.697535-1812-51981139305307/AnsiballZ_command.py'
Oct 09 09:43:18 compute-1 sudo[62495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:43:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:18.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:43:19 compute-1 python3.9[62497]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:43:19 compute-1 ovs-vsctl[62499]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 09 09:43:19 compute-1 sudo[62495]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:19 compute-1 ceph-mon[9795]: pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:19 compute-1 sudo[62651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ociqwdlxjtsarwbopcwlqulhaxrddtbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002999.4138522-1854-152103547469805/AnsiballZ_command.py'
Oct 09 09:43:19 compute-1 sudo[62651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:19.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:19 compute-1 python3.9[62653]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:43:19 compute-1 ovs-vsctl[62654]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 09 09:43:19 compute-1 sudo[62651]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:20 compute-1 sshd-session[51304]: Connection closed by 192.168.122.30 port 53684
Oct 09 09:43:20 compute-1 sshd-session[51301]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:43:20 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Oct 09 09:43:20 compute-1 systemd[1]: session-33.scope: Consumed 40.888s CPU time.
Oct 09 09:43:20 compute-1 systemd-logind[798]: Session 33 logged out. Waiting for processes to exit.
Oct 09 09:43:20 compute-1 systemd-logind[798]: Removed session 33.
Oct 09 09:43:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:43:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:20.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:21 compute-1 ceph-mon[9795]: pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:21.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:22.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:23 compute-1 ceph-mon[9795]: pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:23.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:24.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:24 compute-1 sshd-session[62681]: Accepted publickey for zuul from 192.168.122.30 port 42278 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:43:24 compute-1 systemd-logind[798]: New session 35 of user zuul.
Oct 09 09:43:24 compute-1 systemd[1]: Started Session 35 of User zuul.
Oct 09 09:43:24 compute-1 sshd-session[62681]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:43:25 compute-1 ceph-mon[9795]: pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:25 compute-1 python3.9[62835]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:43:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:25.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:26 compute-1 sudo[62989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyidamvuozgdjkoxnwzknbrpfrwlkuib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003006.1560032-63-263193935114698/AnsiballZ_file.py'
Oct 09 09:43:26 compute-1 sudo[62989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:26 compute-1 python3.9[62991]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:26 compute-1 sudo[62989]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:26.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:26 compute-1 sudo[63141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqucdvjlqpejdepxfbyzfbgnufptrbqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003006.7529483-63-248920905059777/AnsiballZ_file.py'
Oct 09 09:43:26 compute-1 sudo[63141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:27 compute-1 python3.9[63143]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:27 compute-1 sudo[63141]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:27 compute-1 sudo[63294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlwmaqvfmdxymytrcfjxfvuxjxgemnfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003007.1978016-63-87231616836037/AnsiballZ_file.py'
Oct 09 09:43:27 compute-1 sudo[63294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:27 compute-1 ceph-mon[9795]: pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:27 compute-1 python3.9[63296]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:27 compute-1 sudo[63294]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:27.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:27 compute-1 sudo[63446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqqpwkdnzvfjoqgkfllazuftcnsiutgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003007.6717083-63-27820079135888/AnsiballZ_file.py'
Oct 09 09:43:27 compute-1 sudo[63446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:28 compute-1 python3.9[63448]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:28 compute-1 sudo[63446]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:28 compute-1 systemd[1]: Stopping User Manager for UID 0...
Oct 09 09:43:28 compute-1 systemd[62108]: Activating special unit Exit the Session...
Oct 09 09:43:28 compute-1 systemd[62108]: Stopped target Main User Target.
Oct 09 09:43:28 compute-1 systemd[62108]: Stopped target Basic System.
Oct 09 09:43:28 compute-1 systemd[62108]: Stopped target Paths.
Oct 09 09:43:28 compute-1 systemd[62108]: Stopped target Sockets.
Oct 09 09:43:28 compute-1 systemd[62108]: Stopped target Timers.
Oct 09 09:43:28 compute-1 systemd[62108]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 09:43:28 compute-1 systemd[62108]: Closed D-Bus User Message Bus Socket.
Oct 09 09:43:28 compute-1 systemd[62108]: Stopped Create User's Volatile Files and Directories.
Oct 09 09:43:28 compute-1 systemd[62108]: Removed slice User Application Slice.
Oct 09 09:43:28 compute-1 systemd[62108]: Reached target Shutdown.
Oct 09 09:43:28 compute-1 systemd[62108]: Finished Exit the Session.
Oct 09 09:43:28 compute-1 systemd[62108]: Reached target Exit the Session.
Oct 09 09:43:28 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Oct 09 09:43:28 compute-1 systemd[1]: Stopped User Manager for UID 0.
Oct 09 09:43:28 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 09 09:43:28 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 09 09:43:28 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 09 09:43:28 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 09 09:43:28 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Oct 09 09:43:28 compute-1 sudo[63599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfymdncfbcjqunrtofirtvukeklrhhrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003008.1322696-63-55026984149676/AnsiballZ_file.py'
Oct 09 09:43:28 compute-1 sudo[63599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:28 compute-1 python3.9[63601]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:28 compute-1 sudo[63599]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:28.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:29 compute-1 sudo[63752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:43:29 compute-1 sudo[63752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:29 compute-1 sudo[63752]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:29 compute-1 python3.9[63751]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:43:29 compute-1 ceph-mon[9795]: pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:29 compute-1 sudo[63927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkakkigirtcwdesimnszzuvillatuftm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003009.2904935-195-63472283186142/AnsiballZ_seboolean.py'
Oct 09 09:43:29 compute-1 sudo[63927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:29 compute-1 python3.9[63929]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 09 09:43:30 compute-1 sudo[63927]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:30 compute-1 python3.9[64079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:30.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:31 compute-1 python3.9[64200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003010.4133956-219-103154044020975/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:31 compute-1 ceph-mon[9795]: pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:31 compute-1 python3.9[64351]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:32 compute-1 python3.9[64472]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003011.5631163-264-280003673628122/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:32 compute-1 sudo[64622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbtwudsmsacbznaughavfqgaeecpbpjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003012.579995-315-191107675522366/AnsiballZ_setup.py'
Oct 09 09:43:32 compute-1 sudo[64622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:32.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:33 compute-1 python3.9[64624]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:43:33 compute-1 sudo[64622]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:33 compute-1 sudo[64707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pszhktxyzvcsobyhimepjktnljtxvvbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003012.579995-315-191107675522366/AnsiballZ_dnf.py'
Oct 09 09:43:33 compute-1 sudo[64707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:33 compute-1 ceph-mon[9795]: pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:33 compute-1 python3.9[64709]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:43:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:33.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:34 compute-1 sudo[64707]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:34.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:35 compute-1 sudo[64860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnkbjsiulbpqayralwwihqcelxdfmcaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003014.7569406-351-61493191660215/AnsiballZ_systemd.py'
Oct 09 09:43:35 compute-1 sudo[64860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:35 compute-1 python3.9[64862]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:43:35 compute-1 sudo[64860]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:35 compute-1 ceph-mon[9795]: pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:43:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:35.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:35 compute-1 python3.9[65016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:36 compute-1 python3.9[65138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003015.654522-375-50066069519952/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:36 compute-1 python3.9[65288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:36.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:37 compute-1 python3.9[65409]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003016.5177639-375-6988924721572/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:37 compute-1 ceph-mon[9795]: pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:43:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:43:38 compute-1 python3.9[65560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:38 compute-1 python3.9[65681]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003017.9636252-507-182668819495583/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:38.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:39 compute-1 python3.9[65831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:39 compute-1 python3.9[65952]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003018.7319849-507-34303956305945/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:39 compute-1 ceph-mon[9795]: pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:39.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:40 compute-1 python3.9[66103]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:43:40 compute-1 sudo[66255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwcywwrpialcvsynnsffffchlaptyaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003020.265384-621-219889571093708/AnsiballZ_file.py'
Oct 09 09:43:40 compute-1 sudo[66255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:40 compute-1 python3.9[66257]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:40 compute-1 sudo[66255]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:40.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:40 compute-1 sudo[66407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqbsgggnllffsjoiozagbuselcnfguqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003020.783549-645-57438418696281/AnsiballZ_stat.py'
Oct 09 09:43:40 compute-1 sudo[66407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:41 compute-1 python3.9[66409]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:41 compute-1 sudo[66407]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:41 compute-1 sudo[66485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckibotnjpgqijecbqmvbcbgnvnkgzfss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003020.783549-645-57438418696281/AnsiballZ_file.py'
Oct 09 09:43:41 compute-1 sudo[66485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:41 compute-1 python3.9[66487]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:41 compute-1 sudo[66485]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:41 compute-1 ceph-mon[9795]: pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:41 compute-1 sudo[66638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwsjrrklfvkpeymjplozqrudksiqvqyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003021.5837545-645-181199938919906/AnsiballZ_stat.py'
Oct 09 09:43:41 compute-1 sudo[66638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:41.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:41 compute-1 python3.9[66640]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:41 compute-1 sudo[66638]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:42 compute-1 sudo[66716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngdhkpthrbeuynyealxeiobfrarynlzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003021.5837545-645-181199938919906/AnsiballZ_file.py'
Oct 09 09:43:42 compute-1 sudo[66716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:42 compute-1 python3.9[66718]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:42 compute-1 sudo[66716]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:42 compute-1 sudo[66868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwoxyhagdnbxmcxexfdydpvmtftalxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003022.3982797-714-134598227072411/AnsiballZ_file.py'
Oct 09 09:43:42 compute-1 sudo[66868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:42 compute-1 python3.9[66870]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:42 compute-1 sudo[66868]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:42.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:43 compute-1 sudo[67020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mapmdsttqntruuwduqavfigqmbvawxdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003022.8937287-738-39697049091372/AnsiballZ_stat.py'
Oct 09 09:43:43 compute-1 sudo[67020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:43 compute-1 python3.9[67022]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:43 compute-1 sudo[67020]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:43 compute-1 sudo[67099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqxoffriwkfniaxjlkocmmwlacqpbyvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003022.8937287-738-39697049091372/AnsiballZ_file.py'
Oct 09 09:43:43 compute-1 sudo[67099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:43 compute-1 python3.9[67101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:43 compute-1 ceph-mon[9795]: pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:43 compute-1 sudo[67099]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:43.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:43 compute-1 sudo[67251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wflrbmmxlxdgijlfrrbvoyfduqanytwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003023.7205086-774-104050848290250/AnsiballZ_stat.py'
Oct 09 09:43:43 compute-1 sudo[67251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:44 compute-1 python3.9[67253]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:44 compute-1 sudo[67251]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:44 compute-1 sudo[67329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmumjfclamngahjxxjvlfnbjuvpepubg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003023.7205086-774-104050848290250/AnsiballZ_file.py'
Oct 09 09:43:44 compute-1 sudo[67329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:44 compute-1 python3.9[67331]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:44 compute-1 sudo[67329]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:44 compute-1 sudo[67481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jglodjzwxqclowgeoinzjfkskydcytox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003024.5576694-810-18116254730669/AnsiballZ_systemd.py'
Oct 09 09:43:44 compute-1 sudo[67481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:44.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:44 compute-1 python3.9[67483]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:43:45 compute-1 systemd[1]: Reloading.
Oct 09 09:43:45 compute-1 systemd-rc-local-generator[67507]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:45 compute-1 systemd-sysv-generator[67511]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:45 compute-1 sudo[67481]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:45 compute-1 ceph-mon[9795]: pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:45 compute-1 sudo[67671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuftjhbtucxoolagltjyvaywlucdhhyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003025.4084127-834-19347227298694/AnsiballZ_stat.py'
Oct 09 09:43:45 compute-1 sudo[67671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:45 compute-1 python3.9[67673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:45 compute-1 sudo[67671]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:45 compute-1 sudo[67749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tntllcruugvrtxqfcambstwecaeuztsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003025.4084127-834-19347227298694/AnsiballZ_file.py'
Oct 09 09:43:45 compute-1 sudo[67749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:46 compute-1 python3.9[67751]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:46 compute-1 sudo[67749]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:46 compute-1 sudo[67901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhkkkvakuuiyowkesqmgrawisjnjdsao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003026.2229888-870-272073313583309/AnsiballZ_stat.py'
Oct 09 09:43:46 compute-1 sudo[67901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:46 compute-1 python3.9[67903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:46 compute-1 sudo[67901]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:46 compute-1 sudo[67979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pimcbyuawbtauhjrsoeokjlxpanixlcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003026.2229888-870-272073313583309/AnsiballZ_file.py'
Oct 09 09:43:46 compute-1 sudo[67979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:46.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:46 compute-1 python3.9[67981]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:46 compute-1 sudo[67979]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:47 compute-1 sudo[68131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxcmlugizvcclbmnfusodlcggntagcuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003027.0650666-906-158479895057433/AnsiballZ_systemd.py'
Oct 09 09:43:47 compute-1 sudo[68131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:47 compute-1 python3.9[68133]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:43:47 compute-1 systemd[1]: Reloading.
Oct 09 09:43:47 compute-1 systemd-rc-local-generator[68154]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:47 compute-1 ceph-mon[9795]: pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:47 compute-1 systemd-sysv-generator[68158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:47 compute-1 systemd[1]: Starting Create netns directory...
Oct 09 09:43:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:47.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:47 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:43:47 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:43:47 compute-1 systemd[1]: Finished Create netns directory.
Oct 09 09:43:47 compute-1 sudo[68131]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:48 compute-1 sudo[68337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzbtnzcqwqkfzcbgjsudnfaikzcmtsvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003028.038756-936-138756326699637/AnsiballZ_file.py'
Oct 09 09:43:48 compute-1 sudo[68337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:48 compute-1 ovn_controller[62080]: 2025-10-09T09:43:48Z|00025|memory|INFO|16256 kB peak resident set size after 30.1 seconds
Oct 09 09:43:48 compute-1 ovn_controller[62080]: 2025-10-09T09:43:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct 09 09:43:48 compute-1 podman[68299]: 2025-10-09 09:43:48.281764771 +0000 UTC m=+0.066003149 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 09:43:48 compute-1 python3.9[68345]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:48 compute-1 sudo[68337]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:48 compute-1 sudo[68501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbzcomfzeyafxotlstvxtofyipfkwih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003028.5825226-960-219643761002661/AnsiballZ_stat.py'
Oct 09 09:43:48 compute-1 sudo[68501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:48 compute-1 python3.9[68503]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:48.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:48 compute-1 sudo[68501]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:49 compute-1 sudo[68594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:43:49 compute-1 sudo[68594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:49 compute-1 sudo[68594]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:49 compute-1 sudo[68649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezsmhlzskwpdxntycrxdbnxcjmyhtuus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003028.5825226-960-219643761002661/AnsiballZ_copy.py'
Oct 09 09:43:49 compute-1 sudo[68649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:49 compute-1 python3.9[68651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003028.5825226-960-219643761002661/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:49 compute-1 sudo[68649]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:49 compute-1 ceph-mon[9795]: pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:43:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:49 compute-1 sudo[68802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qniglyqhrqfvuokdnjqsndhctbtfvgqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003029.6913683-1011-41905044961981/AnsiballZ_file.py'
Oct 09 09:43:49 compute-1 sudo[68802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:50 compute-1 python3.9[68804]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:50 compute-1 sudo[68802]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:50 compute-1 sudo[68954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxytvknxbftqzxznorvzhdpdflgvcfqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003030.2172585-1035-132523593094209/AnsiballZ_stat.py'
Oct 09 09:43:50 compute-1 sudo[68954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:50 compute-1 python3.9[68956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:50 compute-1 sudo[68954]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:50 compute-1 sudo[69077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctsczsoyrxwrptrfiolhppetseubvndl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003030.2172585-1035-132523593094209/AnsiballZ_copy.py'
Oct 09 09:43:50 compute-1 sudo[69077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:50.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:50 compute-1 python3.9[69079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003030.2172585-1035-132523593094209/.source.json _original_basename=.tvm77msd follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:50 compute-1 sudo[69077]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:51 compute-1 sudo[69229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edhomamkkitpziucsmztuxyqmttzrkpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003031.1372879-1080-218754237844920/AnsiballZ_file.py'
Oct 09 09:43:51 compute-1 sudo[69229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:51 compute-1 python3.9[69231]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:51 compute-1 sudo[69229]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:51 compute-1 ceph-mon[9795]: pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:51.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:51 compute-1 sudo[69382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhpfenbjteclwfnffmxlhnmboqdjedyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003031.6951802-1104-103552980656436/AnsiballZ_stat.py'
Oct 09 09:43:51 compute-1 sudo[69382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:52 compute-1 sudo[69382]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:52 compute-1 sudo[69505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qecpixwmudcoywtrezppkqxqninfzkfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003031.6951802-1104-103552980656436/AnsiballZ_copy.py'
Oct 09 09:43:52 compute-1 sudo[69505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:52 compute-1 sudo[69505]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:52.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:53 compute-1 sudo[69657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbkjlejcltgczlgxtpowrsqiaqdlastd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003032.7699864-1155-114139855737750/AnsiballZ_container_config_data.py'
Oct 09 09:43:53 compute-1 sudo[69657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:53 compute-1 python3.9[69659]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 09 09:43:53 compute-1 sudo[69657]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:53 compute-1 ceph-mon[9795]: pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:53.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:53 compute-1 sudo[69810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkaalluqddhdrinuvoosrtycktfsnddj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003033.4732344-1182-110870989743653/AnsiballZ_container_config_hash.py'
Oct 09 09:43:53 compute-1 sudo[69810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:53 compute-1 python3.9[69812]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:43:53 compute-1 sudo[69810]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:54 compute-1 sudo[69962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arcduvvpijdfkmbssqxubsmzxtlcqugh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003034.1631682-1209-25277850126643/AnsiballZ_podman_container_info.py'
Oct 09 09:43:54 compute-1 sudo[69962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:54 compute-1 python3.9[69964]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 09:43:54 compute-1 sudo[69962]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:54.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:55 compute-1 ceph-mon[9795]: pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:43:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:55.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:43:55 compute-1 sudo[70134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axxlwyxvcuqywdiagvzvolokbhcklrvj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003035.5399184-1248-239519503208434/AnsiballZ_edpm_container_manage.py'
Oct 09 09:43:55 compute-1 sudo[70134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:56 compute-1 python3[70136]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:43:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:56.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:57 compute-1 ceph-mon[9795]: pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:57.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:58.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:59 compute-1 ceph-mon[9795]: pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:43:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:59.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:00.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:01 compute-1 ceph-mon[9795]: pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:01.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:44:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:02.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:44:03 compute-1 ceph-mon[9795]: pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:03.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:04 compute-1 podman[70147]: 2025-10-09 09:44:04.132091966 +0000 UTC m=+7.959278431 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:44:04 compute-1 podman[70251]: 2025-10-09 09:44:04.224390164 +0000 UTC m=+0.027858156 container create 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:44:04 compute-1 podman[70251]: 2025-10-09 09:44:04.211214581 +0000 UTC m=+0.014682604 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:44:04 compute-1 python3[70136]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:44:04 compute-1 sudo[70134]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:44:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:04.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:05 compute-1 sudo[70430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlizkpagbuvwcsjgtckodknfdnnqnvkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003045.1783261-1272-80164795361629/AnsiballZ_stat.py'
Oct 09 09:44:05 compute-1 sudo[70430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:05 compute-1 python3.9[70432]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:44:05 compute-1 sudo[70430]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:05 compute-1 ceph-mon[9795]: pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:05.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:05 compute-1 sudo[70584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niopghbovdbvukryrnzyqxoxadblpdqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003045.749341-1299-223125305893592/AnsiballZ_file.py'
Oct 09 09:44:05 compute-1 sudo[70584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:06 compute-1 python3.9[70586]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:06 compute-1 sudo[70584]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:06 compute-1 sudo[70660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkzdyipvijgevdioyszbgzzuyecdxbpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003045.749341-1299-223125305893592/AnsiballZ_stat.py'
Oct 09 09:44:06 compute-1 sudo[70660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:06 compute-1 python3.9[70662]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:44:06 compute-1 sudo[70660]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:06 compute-1 sudo[70811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjyfrpwglwtdpxdxbwclnecvqlqzfqww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003046.4698431-1299-122621289515411/AnsiballZ_copy.py'
Oct 09 09:44:06 compute-1 sudo[70811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:06.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:06 compute-1 python3.9[70813]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003046.4698431-1299-122621289515411/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:07 compute-1 sudo[70811]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:07 compute-1 sudo[70887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgbzhsfcdxtbnipbzmbwqvlzoxymtav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003046.4698431-1299-122621289515411/AnsiballZ_systemd.py'
Oct 09 09:44:07 compute-1 sudo[70887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:07 compute-1 python3.9[70889]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:44:07 compute-1 systemd[1]: Reloading.
Oct 09 09:44:07 compute-1 systemd-sysv-generator[70913]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:07 compute-1 systemd-rc-local-generator[70910]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:07 compute-1 sudo[70887]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:07 compute-1 ceph-mon[9795]: pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:07.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:07 compute-1 sudo[70999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnfohvcsshrbefschkedxbntlngwsucw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003046.4698431-1299-122621289515411/AnsiballZ_systemd.py'
Oct 09 09:44:07 compute-1 sudo[70999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:08 compute-1 python3.9[71001]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:08 compute-1 systemd[1]: Reloading.
Oct 09 09:44:08 compute-1 systemd-rc-local-generator[71024]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:08 compute-1 systemd-sysv-generator[71030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:08 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Oct 09 09:44:08 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:44:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cec53ae44fdafe8f7dd68392008e8f9d7af64c1680de645755463dd07383fe/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 09 09:44:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cec53ae44fdafe8f7dd68392008e8f9d7af64c1680de645755463dd07383fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 09:44:08 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75.
Oct 09 09:44:08 compute-1 podman[71042]: 2025-10-09 09:44:08.425279174 +0000 UTC m=+0.080152994 container init 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: + sudo -E kolla_set_configs
Oct 09 09:44:08 compute-1 podman[71042]: 2025-10-09 09:44:08.446121063 +0000 UTC m=+0.100994861 container start 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:44:08 compute-1 edpm-start-podman-container[71042]: ovn_metadata_agent
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Validating config file
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Copying service configuration files
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Writing out command to execute
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 09 09:44:08 compute-1 edpm-start-podman-container[71041]: Creating additional drop-in dependency for "ovn_metadata_agent" (5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75)
Oct 09 09:44:08 compute-1 podman[71061]: 2025-10-09 09:44:08.492337509 +0000 UTC m=+0.038912304 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: ++ cat /run_command
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: + CMD=neutron-ovn-metadata-agent
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: + ARGS=
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: + sudo kolla_copy_cacerts
Oct 09 09:44:08 compute-1 systemd[1]: Reloading.
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: + [[ ! -n '' ]]
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: + . kolla_extend_start
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: Running command: 'neutron-ovn-metadata-agent'
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: + umask 0022
Oct 09 09:44:08 compute-1 ovn_metadata_agent[71054]: + exec neutron-ovn-metadata-agent
Oct 09 09:44:08 compute-1 systemd-rc-local-generator[71123]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:08 compute-1 systemd-sysv-generator[71127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:08 compute-1 systemd[1]: Started ovn_metadata_agent container.
Oct 09 09:44:08 compute-1 sudo[70999]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:08.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:09 compute-1 sudo[71162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:44:09 compute-1 sudo[71162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:09 compute-1 sudo[71162]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:09 compute-1 sshd-session[62684]: Connection closed by 192.168.122.30 port 42278
Oct 09 09:44:09 compute-1 sshd-session[62681]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:44:09 compute-1 systemd[1]: session-35.scope: Deactivated successfully.
Oct 09 09:44:09 compute-1 systemd[1]: session-35.scope: Consumed 40.304s CPU time.
Oct 09 09:44:09 compute-1 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Oct 09 09:44:09 compute-1 systemd-logind[798]: Removed session 35.
Oct 09 09:44:09 compute-1 sudo[71188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:44:09 compute-1 sudo[71188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:09 compute-1 sudo[71188]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:09 compute-1 ceph-mon[9795]: pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:09 compute-1 sudo[71213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:44:09 compute-1 sudo[71213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:09.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.990 71059 INFO neutron.common.config [-] Logging enabled!
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.990 71059 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.990 71059 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.030 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.030 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.030 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.031 71059 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.031 71059 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.045 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 1479fb1d-afaa-427a-bdce-40294d3573d2 (UUID: 1479fb1d-afaa-427a-bdce-40294d3573d2) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.064 71059 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.064 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.064 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.064 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.066 71059 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.071 71059 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.075 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '1479fb1d-afaa-427a-bdce-40294d3573d2'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], external_ids={}, name=1479fb1d-afaa-427a-bdce-40294d3573d2, nb_cfg_timestamp=1760003006163, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.076 71059 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fcc797b2f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.076 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.077 71059 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.077 71059 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.077 71059 INFO oslo_service.service [-] Starting 1 workers
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.080 71059 DEBUG oslo_service.service [-] Started child 71254 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.083 71059 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpzcuib4u3/privsep.sock']
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.083 71254 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-889826'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.105 71254 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.106 71254 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.106 71254 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.108 71254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.113 71254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.118 71254 INFO eventlet.wsgi.server [-] (71254) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 09 09:44:10 compute-1 sudo[71213]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:10 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.617 71059 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.617 71059 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzcuib4u3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.527 71273 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.530 71273 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.533 71273 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.533 71273 INFO oslo.privsep.daemon [-] privsep daemon running as pid 71273
Oct 09 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.619 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[a479a9b4-ee09-41e3-b706-b58d1a813be6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:44:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:10.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.019 71273 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.019 71273 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.019 71273 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.457 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[8d36c56d-d298-4aa2-b36e-f2e30154d0ae]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.459 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, column=external_ids, values=({'neutron:ovn-metadata-id': '71d73966-7ab7-5393-ba2c-b8eed7f232a8'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.467 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.503 71059 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 09 09:44:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:11.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:12 compute-1 ceph-mon[9795]: pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:12.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:13 compute-1 sudo[71280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:44:13 compute-1 sudo[71280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:13 compute-1 sudo[71280]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:13.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:14 compute-1 ceph-mon[9795]: pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:14 compute-1 sshd-session[71305]: Accepted publickey for zuul from 192.168.122.30 port 41084 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:44:14 compute-1 systemd-logind[798]: New session 36 of user zuul.
Oct 09 09:44:14 compute-1 systemd[1]: Started Session 36 of User zuul.
Oct 09 09:44:14 compute-1 sshd-session[71305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:44:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:14.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:15 compute-1 python3.9[71458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:44:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:15.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:16 compute-1 sudo[71613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vflrtruahdxeammbvjnqkipgjtuswbqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003055.7333372-63-184442572971825/AnsiballZ_command.py'
Oct 09 09:44:16 compute-1 sudo[71613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:16 compute-1 ceph-mon[9795]: pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:16 compute-1 python3.9[71615]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:16 compute-1 sudo[71613]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:16.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:17 compute-1 sudo[71774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogicuttlbpxkehdkaadyonvcvsolnbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003056.5793674-96-95406622080537/AnsiballZ_systemd_service.py'
Oct 09 09:44:17 compute-1 sudo[71774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:17 compute-1 python3.9[71776]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:44:17 compute-1 systemd[1]: Reloading.
Oct 09 09:44:17 compute-1 systemd-rc-local-generator[71799]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:17 compute-1 systemd-sysv-generator[71803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:17 compute-1 sudo[71774]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:17.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:18 compute-1 ceph-mon[9795]: pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:44:18 compute-1 python3.9[71962]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:44:18 compute-1 network[71979]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:44:18 compute-1 network[71980]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:44:18 compute-1 network[71981]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:44:18 compute-1 podman[71987]: 2025-10-09 09:44:18.864285228 +0000 UTC m=+0.068218836 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 09:44:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:18.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:19.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:20 compute-1 ceph-mon[9795]: pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:44:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:44:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:20.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:21 compute-1 sudo[72269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lunnngtnewwhqnzacobitogctwsrcylh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003061.4644606-153-92891348482820/AnsiballZ_systemd_service.py'
Oct 09 09:44:21 compute-1 sudo[72269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:21 compute-1 python3.9[72271]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:21 compute-1 sudo[72269]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:22 compute-1 ceph-mon[9795]: pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:44:22 compute-1 sudo[72422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqlcywdfpqllvrebpylpvuosiyvbsdps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003062.023569-153-209261374856613/AnsiballZ_systemd_service.py'
Oct 09 09:44:22 compute-1 sudo[72422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:22 compute-1 python3.9[72424]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:22 compute-1 sudo[72422]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:22 compute-1 sudo[72575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhgbccqqfcioqauulbmqurjwwfkodjnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003062.5840142-153-267282215339066/AnsiballZ_systemd_service.py'
Oct 09 09:44:22 compute-1 sudo[72575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:22.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:23 compute-1 python3.9[72577]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:23 compute-1 sudo[72575]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:23 compute-1 sudo[72728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdbggrbrvfydukuoywobzfdzwturzhji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003063.141092-153-47309485272501/AnsiballZ_systemd_service.py'
Oct 09 09:44:23 compute-1 sudo[72728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:23 compute-1 python3.9[72730]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:23 compute-1 sudo[72728]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:23.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:23 compute-1 sudo[72882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agioqxnvrawvamvleugenjnjvhyxgxwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003063.6917768-153-8378546334064/AnsiballZ_systemd_service.py'
Oct 09 09:44:23 compute-1 sudo[72882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:24 compute-1 ceph-mon[9795]: pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1 op/s
Oct 09 09:44:24 compute-1 python3.9[72884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:24 compute-1 sudo[72882]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:24 compute-1 sudo[73035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edsgxdhhqqeanyevgnllknpxynogvnrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003064.2437534-153-113169266868991/AnsiballZ_systemd_service.py'
Oct 09 09:44:24 compute-1 sudo[73035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:24 compute-1 python3.9[73037]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:24 compute-1 sudo[73035]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:25 compute-1 sudo[73188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjpiheiveylmufcpexvwwiiepevzlin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003064.8211226-153-18541163865840/AnsiballZ_systemd_service.py'
Oct 09 09:44:25 compute-1 sudo[73188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:25 compute-1 python3.9[73190]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:25 compute-1 sudo[73188]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:25.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:26 compute-1 sudo[73342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfhoviweyjbrbaskbqhxxyrbxqxpmbup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003065.6801336-309-40883287319873/AnsiballZ_file.py'
Oct 09 09:44:26 compute-1 sudo[73342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:26 compute-1 ceph-mon[9795]: pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:26 compute-1 python3.9[73344]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:26 compute-1 sudo[73342]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:26 compute-1 sudo[73494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwzbjlqlzxocirsuopmfvpzdtbxpmuth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003066.2614202-309-141349831081769/AnsiballZ_file.py'
Oct 09 09:44:26 compute-1 sudo[73494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:26 compute-1 python3.9[73496]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:26 compute-1 sudo[73494]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:26 compute-1 sudo[73646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eevrlrevfotaaxcgumdworuksabvklcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003066.6931996-309-261742102949331/AnsiballZ_file.py'
Oct 09 09:44:26 compute-1 sudo[73646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:27 compute-1 python3.9[73648]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:27 compute-1 sudo[73646]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:27 compute-1 sudo[73798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcyysukfhkyncckeyfhitiqejblhayqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003067.1431043-309-119638198321150/AnsiballZ_file.py'
Oct 09 09:44:27 compute-1 sudo[73798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:27 compute-1 python3.9[73800]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:27 compute-1 sudo[73798]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:27 compute-1 sudo[73951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mycjgeqyrufuoehhkodtgiohlucdwish ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003067.5979977-309-128574207043227/AnsiballZ_file.py'
Oct 09 09:44:27 compute-1 sudo[73951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:27 compute-1 python3.9[73953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:27 compute-1 sudo[73951]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:28 compute-1 ceph-mon[9795]: pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1 op/s
Oct 09 09:44:28 compute-1 sudo[74103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnugkcrdgnsdjlhdlaagibiksqipkjdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003068.033996-309-204921387091800/AnsiballZ_file.py'
Oct 09 09:44:28 compute-1 sudo[74103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:28 compute-1 python3.9[74105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:28 compute-1 sudo[74103]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:28 compute-1 sudo[74255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfccnmyeapwwboqhnnxifmsovxhuklie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003068.4842327-309-70578342451218/AnsiballZ_file.py'
Oct 09 09:44:28 compute-1 sudo[74255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:28 compute-1 python3.9[74257]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:28 compute-1 sudo[74255]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:29 compute-1 sudo[74381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:44:29 compute-1 sudo[74381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:29 compute-1 sudo[74381]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:29 compute-1 sudo[74432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raolodeyjptbwzcaydvxqlvyrprewkst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003069.0997071-459-157809161507994/AnsiballZ_file.py'
Oct 09 09:44:29 compute-1 sudo[74432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:29 compute-1 python3.9[74434]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:29 compute-1 sudo[74432]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:29 compute-1 sudo[74585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blaonmklzqmzdzzscnbxoinglgeyqzyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003069.545184-459-59748571368598/AnsiballZ_file.py'
Oct 09 09:44:29 compute-1 sudo[74585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:29.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:29 compute-1 python3.9[74587]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:29 compute-1 sudo[74585]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:30 compute-1 ceph-mon[9795]: pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:44:30 compute-1 sudo[74737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zthhcjoufjponifchwffeakpndbzwoqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003070.0013888-459-71236296363034/AnsiballZ_file.py'
Oct 09 09:44:30 compute-1 sudo[74737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:30 compute-1 python3.9[74739]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:30 compute-1 sudo[74737]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:30 compute-1 sudo[74889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unsgmqufowkodvbiyridebyjercqxodj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003070.4452987-459-27744673702203/AnsiballZ_file.py'
Oct 09 09:44:30 compute-1 sudo[74889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:30 compute-1 python3.9[74891]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:30 compute-1 sudo[74889]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:31 compute-1 sudo[75041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzyspjgprasdnnogulnjnvaungijvcat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003070.8780332-459-59212820851882/AnsiballZ_file.py'
Oct 09 09:44:31 compute-1 sudo[75041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:31 compute-1 python3.9[75043]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:31 compute-1 sudo[75041]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:31 compute-1 sudo[75194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfiydgzkefluxbnvokbmezozwmhibvjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003071.3188522-459-103142446325823/AnsiballZ_file.py'
Oct 09 09:44:31 compute-1 sudo[75194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:31 compute-1 python3.9[75196]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:31 compute-1 sudo[75194]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:31.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:31 compute-1 sudo[75346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysfomdssdeobsqetkhtdpolulaqfqaxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003071.7540252-459-281004649030676/AnsiballZ_file.py'
Oct 09 09:44:31 compute-1 sudo[75346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:32 compute-1 python3.9[75348]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:32 compute-1 sudo[75346]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:32 compute-1 ceph-mon[9795]: pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:44:32 compute-1 sudo[75498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtljuljdygyhdyixqwphhmdrquqpjgxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003072.6380727-612-103775065231642/AnsiballZ_command.py'
Oct 09 09:44:32 compute-1 sudo[75498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:32 compute-1 python3.9[75500]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                              systemctl disable --now certmonger.service
                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                            fi
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:33 compute-1 sudo[75498]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:33 compute-1 python3.9[75653]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:44:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:33.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:34 compute-1 sudo[75803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgcfcemgvgptgnsjnivileuxfrppzlra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003073.9224896-666-165185365220544/AnsiballZ_systemd_service.py'
Oct 09 09:44:34 compute-1 sudo[75803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:34 compute-1 ceph-mon[9795]: pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:44:34 compute-1 python3.9[75805]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:44:34 compute-1 systemd[1]: Reloading.
Oct 09 09:44:34 compute-1 systemd-rc-local-generator[75828]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:34 compute-1 systemd-sysv-generator[75832]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:34 compute-1 sudo[75803]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:34 compute-1 sudo[75990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhrufssfpkciatoxrwkxpvchcpkcdibt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003074.744175-690-138276446366834/AnsiballZ_command.py'
Oct 09 09:44:34 compute-1 sudo[75990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:35 compute-1 python3.9[75992]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:35 compute-1 sudo[75990]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:44:35 compute-1 sudo[76144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpgkcqygyjxsigntrmeuatnckkvldhje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003075.2005403-690-169190831198988/AnsiballZ_command.py'
Oct 09 09:44:35 compute-1 sudo[76144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:35 compute-1 python3.9[76146]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:35 compute-1 sudo[76144]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:35 compute-1 sudo[76297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irgphmrwaqsxdhfiovmysckfhatzvfda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003075.6360252-690-242719453458455/AnsiballZ_command.py'
Oct 09 09:44:35 compute-1 sudo[76297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:35.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:35 compute-1 python3.9[76299]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:35 compute-1 sudo[76297]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:36 compute-1 ceph-mon[9795]: pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:36 compute-1 sudo[76450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhzpbmhwjinucnjiwrzkrbfbrkdugevf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003076.078842-690-162850288033044/AnsiballZ_command.py'
Oct 09 09:44:36 compute-1 sudo[76450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:36 compute-1 python3.9[76452]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:36 compute-1 sudo[76450]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:36 compute-1 sudo[76603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwoxfpkkyipudpypfrtziahkmhkxpzlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003076.561033-690-213267504005575/AnsiballZ_command.py'
Oct 09 09:44:36 compute-1 sudo[76603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:36 compute-1 python3.9[76605]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:36 compute-1 sudo[76603]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:36.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:37 compute-1 ceph-mon[9795]: pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:37 compute-1 sudo[76756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlfivwnydaehbvhpebfevmstvysyvpac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003077.051082-690-132819377069079/AnsiballZ_command.py'
Oct 09 09:44:37 compute-1 sudo[76756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:37 compute-1 python3.9[76758]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:37 compute-1 sudo[76756]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:37 compute-1 sudo[76910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzlcywaabinxwxnerigbrznqrmgecade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003077.4974246-690-122188900909016/AnsiballZ_command.py'
Oct 09 09:44:37 compute-1 sudo[76910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:37 compute-1 python3.9[76912]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:37 compute-1 sudo[76910]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:38 compute-1 sudo[77072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tccrsyrbrayqkmgwkfdiuztxdbamwlfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003078.3860555-852-210622062603487/AnsiballZ_getent.py'
Oct 09 09:44:38 compute-1 sudo[77072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:38 compute-1 podman[77037]: 2025-10-09 09:44:38.741192809 +0000 UTC m=+0.040938210 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 09:44:38 compute-1 python3.9[77081]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 09 09:44:38 compute-1 sudo[77072]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:39 compute-1 sudo[77234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcyhibuazasiuljpkmpnsntjroappcgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003079.0692601-876-268917719060251/AnsiballZ_group.py'
Oct 09 09:44:39 compute-1 sudo[77234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:39 compute-1 ceph-mon[9795]: pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:39 compute-1 python3.9[77236]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 09 09:44:39 compute-1 groupadd[77237]: group added to /etc/group: name=libvirt, GID=42473
Oct 09 09:44:39 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:44:39 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:44:39 compute-1 groupadd[77237]: group added to /etc/gshadow: name=libvirt
Oct 09 09:44:39 compute-1 groupadd[77237]: new group: name=libvirt, GID=42473
Oct 09 09:44:39 compute-1 sudo[77234]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:39.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:40 compute-1 sudo[77393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyboyukjvncfsmqamiiekeyvvvbcshpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003079.7621584-900-134570722555795/AnsiballZ_user.py'
Oct 09 09:44:40 compute-1 sudo[77393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:40 compute-1 python3.9[77395]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 09 09:44:40 compute-1 useradd[77397]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 09 09:44:40 compute-1 sudo[77393]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:40 compute-1 sudo[77553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifgupcuxplkfdlgxohbjykcedflvvyfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003080.7596612-933-61758459702955/AnsiballZ_setup.py'
Oct 09 09:44:40 compute-1 sudo[77553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:41 compute-1 python3.9[77555]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:44:41 compute-1 sudo[77553]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:41 compute-1 ceph-mon[9795]: pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:41 compute-1 sudo[77638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wieobthxpihdcifqnimbqtpjujaibtzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003080.7596612-933-61758459702955/AnsiballZ_dnf.py'
Oct 09 09:44:41 compute-1 sudo[77638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:41.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:41 compute-1 python3.9[77640]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:44:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:42.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:43 compute-1 ceph-mon[9795]: pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:43.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:45 compute-1 ceph-mon[9795]: pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:46.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:47 compute-1 ceph-mon[9795]: pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:47.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:48.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:49 compute-1 sudo[77656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:44:49 compute-1 sudo[77656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:49 compute-1 sudo[77656]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:49 compute-1 podman[77680]: 2025-10-09 09:44:49.389237997 +0000 UTC m=+0.060234368 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 09:44:49 compute-1 ceph-mon[9795]: pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:44:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:51 compute-1 ceph-mon[9795]: pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:44:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Cumulative writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                          Cumulative WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 21.18 MB, 0.04 MB/s
                                          Interval WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
                                          
                                          ** Compaction Stats [m-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-0] **
                                          
                                          ** Compaction Stats [m-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-1] **
                                          
                                          ** Compaction Stats [m-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-2] **
                                          
                                          ** Compaction Stats [p-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-0] **
                                          
                                          ** Compaction Stats [p-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-1] **
                                          
                                          ** Compaction Stats [p-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-2] **
                                          
                                          ** Compaction Stats [O-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-0] **
                                          
                                          ** Compaction Stats [O-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-1] **
                                          
                                          ** Compaction Stats [O-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-2] **
                                          
                                          ** Compaction Stats [L] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [L] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [L] **
                                          
                                          ** Compaction Stats [P] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [P] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [P] **
Oct 09 09:44:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:52.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:53 compute-1 ceph-mon[9795]: pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:54.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:55 compute-1 ceph-mon[9795]: pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000014s ======
Oct 09 09:44:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:55.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct 09 09:44:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:57.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:57 compute-1 ceph-mon[9795]: pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:59 compute-1 ceph-mon[9795]: pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:44:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:59.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:45:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:01.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:45:01 compute-1 ceph-mon[9795]: pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:01.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:03 compute-1 ceph-mon[9795]: pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:03.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:45:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:05.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:45:05 compute-1 ceph-mon[9795]: pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:45:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:45:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:45:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:07.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:07 compute-1 ceph-mon[9795]: pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:45:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:45:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:09.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:09 compute-1 sudo[77715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:45:09 compute-1 sudo[77715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:09 compute-1 sudo[77715]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:09 compute-1 podman[77740]: 2025-10-09 09:45:09.423757962 +0000 UTC m=+0.033515999 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:45:09 compute-1 ceph-mon[9795]: pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:09.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:45:10.024 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:45:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:45:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:45:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:45:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:45:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:11.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:11 compute-1 ceph-mon[9795]: pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:45:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:11.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:45:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:13 compute-1 ceph-mon[9795]: pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:45:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:13.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:45:13 compute-1 sudo[77759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:45:13 compute-1 sudo[77759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:13 compute-1 sudo[77759]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:13 compute-1 sudo[77784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 09 09:45:13 compute-1 sudo[77784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:14 compute-1 sudo[77784]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:14 compute-1 sudo[77827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:45:14 compute-1 sudo[77827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:14 compute-1 sudo[77827]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:14 compute-1 sudo[77852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:45:14 compute-1 sudo[77852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:14 compute-1 sudo[77852]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:15.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-1 ceph-mon[9795]: pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:45:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:45:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Cumulative writes: 2433 writes, 14K keys, 2433 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s
                                          Cumulative WAL: 2433 writes, 2433 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 2433 writes, 14K keys, 2433 commit groups, 1.0 writes per commit group, ingest: 38.79 MB, 0.06 MB/s
                                          Interval WAL: 2433 writes, 2433 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    456.2      0.05              0.03         6    0.008       0      0       0.0       0.0
                                            L6      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0    472.3    408.0      0.16              0.09         5    0.031     19K   2240       0.0       0.0
                                           Sum      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    364.3    419.0      0.20              0.12        11    0.018     19K   2240       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    365.8    420.6      0.20              0.12        10    0.020     19K   2240       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    472.3    408.0      0.16              0.09         5    0.031     19K   2240       0.0       0.0
                                          High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    464.0      0.05              0.03         5    0.009       0      0       0.0       0.0
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.021, interval 0.021
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                          Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x55e4b55c29b0#2 capacity: 304.00 MB usage: 2.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(169,2.09 MB,0.688272%) FilterBlock(11,66.42 KB,0.0213372%) IndexBlock(11,134.28 KB,0.0431362%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
Oct 09 09:45:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:15.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:17 compute-1 ceph-mon[9795]: pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:18 compute-1 sudo[78055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:45:18 compute-1 sudo[78055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:18 compute-1 sudo[78055]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:19.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:19 compute-1 podman[78107]: 2025-10-09 09:45:19.571620708 +0000 UTC m=+0.083185523 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 09 09:45:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:19.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:20 compute-1 ceph-mon[9795]: pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:45:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:22 compute-1 ceph-mon[9795]: pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:23.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:24 compute-1 ceph-mon[9795]: pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:25.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:26 compute-1 ceph-mon[9795]: pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:27.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:27.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:28 compute-1 ceph-mon[9795]: pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:29.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:29 compute-1 kernel: SELinux:  Converting 469 SID table entries...
Oct 09 09:45:29 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 09:45:29 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 09 09:45:29 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 09:45:29 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 09 09:45:29 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 09:45:29 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 09:45:29 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 09:45:29 compute-1 sudo[78153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:45:29 compute-1 sudo[78153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:29 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Oct 09 09:45:29 compute-1 sudo[78153]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:45:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:29.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:45:30 compute-1 ceph-mon[9795]: pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:31.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:45:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:31.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:45:32 compute-1 ceph-mon[9795]: pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:33.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:45:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:45:34 compute-1 ceph-mon[9795]: pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:35.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:45:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:35.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:36 compute-1 ceph-mon[9795]: pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:36 compute-1 kernel: SELinux:  Converting 469 SID table entries...
Oct 09 09:45:36 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 09:45:36 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 09 09:45:36 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 09:45:36 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 09 09:45:36 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 09:45:36 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 09:45:36 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 09:45:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:45:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:37.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:45:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:45:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:37.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:45:38 compute-1 ceph-mon[9795]: pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:39 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Oct 09 09:45:39 compute-1 podman[78194]: 2025-10-09 09:45:39.531220575 +0000 UTC m=+0.038324409 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:45:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:45:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:45:40 compute-1 ceph-mon[9795]: pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:41.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:41.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:42 compute-1 ceph-mon[9795]: pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:43.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:43.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:44 compute-1 ceph-mon[9795]: pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:45.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:45 compute-1 ceph-mon[9795]: pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:45.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:47.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:47 compute-1 ceph-mon[9795]: pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:45:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:45:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:49.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:49 compute-1 ceph-mon[9795]: pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:49 compute-1 sudo[82036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:45:49 compute-1 sudo[82036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:49 compute-1 sudo[82036]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:45:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:49.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:45:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:45:50 compute-1 podman[83058]: 2025-10-09 09:45:50.546987997 +0000 UTC m=+0.057923089 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 09 09:45:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:51.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:51 compute-1 ceph-mon[9795]: pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:53.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:53 compute-1 ceph-mon[9795]: pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:45:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:53.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:45:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:55.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:55 compute-1 ceph-mon[9795]: pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:55.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:57.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:57 compute-1 ceph-mon[9795]: pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:57.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:59 compute-1 ceph-mon[9795]: pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:45:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:45:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:46:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:01.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:01 compute-1 ceph-mon[9795]: pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:46:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:01.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:46:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:03.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:03 compute-1 ceph-mon[9795]: pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:03.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:46:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:05.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:05 compute-1 ceph-mon[9795]: pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:05.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:07.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:07 compute-1 ceph-mon[9795]: pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:07.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:09.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:09 compute-1 sudo[95022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:46:09 compute-1 sudo[95022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:09 compute-1 sudo[95022]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:09 compute-1 podman[95047]: 2025-10-09 09:46:09.594600806 +0000 UTC m=+0.037092129 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 09 09:46:09 compute-1 ceph-mon[9795]: pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:46:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:46:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:46:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:46:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:46:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:46:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:11.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:11 compute-1 ceph-mon[9795]: pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:11 compute-1 kernel: SELinux:  Converting 470 SID table entries...
Oct 09 09:46:11 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 09:46:11 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 09 09:46:11 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 09:46:11 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 09 09:46:11 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 09:46:11 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 09:46:11 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 09:46:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:11.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:12 compute-1 groupadd[95088]: group added to /etc/group: name=dnsmasq, GID=992
Oct 09 09:46:12 compute-1 groupadd[95088]: group added to /etc/gshadow: name=dnsmasq
Oct 09 09:46:12 compute-1 groupadd[95088]: new group: name=dnsmasq, GID=992
Oct 09 09:46:12 compute-1 useradd[95095]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 09 09:46:12 compute-1 dbus-broker-launch[789]: Noticed file-system modification, trigger reload.
Oct 09 09:46:12 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=5 res=1
Oct 09 09:46:12 compute-1 dbus-broker-launch[789]: Noticed file-system modification, trigger reload.
Oct 09 09:46:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:13.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:13 compute-1 groupadd[95108]: group added to /etc/group: name=clevis, GID=991
Oct 09 09:46:13 compute-1 groupadd[95108]: group added to /etc/gshadow: name=clevis
Oct 09 09:46:13 compute-1 groupadd[95108]: new group: name=clevis, GID=991
Oct 09 09:46:13 compute-1 useradd[95115]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 09 09:46:13 compute-1 usermod[95125]: add 'clevis' to group 'tss'
Oct 09 09:46:13 compute-1 usermod[95125]: add 'clevis' to shadow group 'tss'
Oct 09 09:46:13 compute-1 ceph-mon[9795]: pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:46:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:46:14 compute-1 polkitd[1120]: Reloading rules
Oct 09 09:46:14 compute-1 polkitd[1120]: Collecting garbage unconditionally...
Oct 09 09:46:14 compute-1 polkitd[1120]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 09:46:14 compute-1 polkitd[1120]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 09:46:14 compute-1 polkitd[1120]: Finished loading, compiling and executing 4 rules
Oct 09 09:46:14 compute-1 polkitd[1120]: Reloading rules
Oct 09 09:46:14 compute-1 polkitd[1120]: Collecting garbage unconditionally...
Oct 09 09:46:14 compute-1 polkitd[1120]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 09:46:14 compute-1 polkitd[1120]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 09:46:14 compute-1 polkitd[1120]: Finished loading, compiling and executing 4 rules
Oct 09 09:46:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:46:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:15.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:46:15 compute-1 groupadd[95314]: group added to /etc/group: name=ceph, GID=167
Oct 09 09:46:15 compute-1 groupadd[95314]: group added to /etc/gshadow: name=ceph
Oct 09 09:46:15 compute-1 groupadd[95314]: new group: name=ceph, GID=167
Oct 09 09:46:15 compute-1 useradd[95320]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 09 09:46:15 compute-1 ceph-mon[9795]: pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:17.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:17 compute-1 ceph-mon[9795]: pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:17 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Oct 09 09:46:17 compute-1 sshd[1242]: Received signal 15; terminating.
Oct 09 09:46:17 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Oct 09 09:46:17 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Oct 09 09:46:17 compute-1 systemd[1]: sshd.service: Consumed 840ms CPU time, read 2.7M from disk, written 0B to disk.
Oct 09 09:46:17 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Oct 09 09:46:17 compute-1 systemd[1]: Stopping sshd-keygen.target...
Oct 09 09:46:17 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:46:17 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:46:17 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:46:17 compute-1 systemd[1]: Reached target sshd-keygen.target.
Oct 09 09:46:17 compute-1 systemd[1]: Starting OpenSSH server daemon...
Oct 09 09:46:17 compute-1 sshd[95940]: Server listening on 0.0.0.0 port 22.
Oct 09 09:46:17 compute-1 sshd[95940]: Server listening on :: port 22.
Oct 09 09:46:17 compute-1 systemd[1]: Started OpenSSH server daemon.
Oct 09 09:46:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:17.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:18 compute-1 sudo[96038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:46:18 compute-1 sudo[96038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:18 compute-1 sudo[96038]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:18 compute-1 sudo[96074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:46:18 compute-1 sudo[96074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:18 compute-1 sudo[96074]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:18 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 09 09:46:18 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 09 09:46:18 compute-1 systemd[1]: Reloading.
Oct 09 09:46:19 compute-1 systemd-sysv-generator[96273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:19 compute-1 systemd-rc-local-generator[96268]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:19.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:19 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 09 09:46:19 compute-1 systemd[1]: Starting PackageKit Daemon...
Oct 09 09:46:19 compute-1 PackageKit[96939]: daemon start
Oct 09 09:46:19 compute-1 ceph-mon[9795]: pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:46:19 compute-1 systemd[1]: Started PackageKit Daemon.
Oct 09 09:46:19 compute-1 sudo[77638]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:19.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:20 compute-1 sudo[98683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usfpucnfwzauzlztbppmhsgacafefdcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003180.062047-969-224965063741846/AnsiballZ_systemd.py'
Oct 09 09:46:20 compute-1 sudo[98683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:20 compute-1 podman[98711]: 2025-10-09 09:46:20.666252451 +0000 UTC m=+0.094822214 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 09 09:46:20 compute-1 python3.9[98723]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:20 compute-1 systemd[1]: Reloading.
Oct 09 09:46:20 compute-1 systemd-rc-local-generator[99328]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:20 compute-1 systemd-sysv-generator[99332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:21.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:21 compute-1 sudo[98683]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:21 compute-1 sudo[100241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aouppekmuhcpkurfzchlkboqjfbwfeuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003181.2114608-969-99532096479696/AnsiballZ_systemd.py'
Oct 09 09:46:21 compute-1 sudo[100241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:21 compute-1 python3.9[100268]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:21 compute-1 ceph-mon[9795]: pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:21 compute-1 systemd[1]: Reloading.
Oct 09 09:46:21 compute-1 systemd-rc-local-generator[100772]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:21 compute-1 systemd-sysv-generator[100777]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:21 compute-1 sudo[100241]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:21.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:22 compute-1 sudo[101525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rczeefrkgswxrbegxuvmsvfiwasdjdbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003182.0531404-969-47352060109412/AnsiballZ_systemd.py'
Oct 09 09:46:22 compute-1 sudo[101525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:22 compute-1 python3.9[101546]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:22 compute-1 systemd[1]: Reloading.
Oct 09 09:46:22 compute-1 systemd-sysv-generator[102034]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:22 compute-1 systemd-rc-local-generator[102028]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:22 compute-1 sudo[101525]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:22 compute-1 sudo[102561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:46:22 compute-1 sudo[102561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:22 compute-1 sudo[102561]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:23 compute-1 sudo[102841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfsilaamjqbpdibunjfifkmwkpqyewv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003182.8969026-969-255825874789684/AnsiballZ_systemd.py'
Oct 09 09:46:23 compute-1 sudo[102841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:23 compute-1 python3.9[102866]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:23 compute-1 systemd[1]: Reloading.
Oct 09 09:46:23 compute-1 systemd-rc-local-generator[103372]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:23 compute-1 systemd-sysv-generator[103378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:23 compute-1 sudo[102841]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:23 compute-1 ceph-mon[9795]: pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 165 op/s
Oct 09 09:46:23 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:46:23 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:46:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:46:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:23.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:46:24 compute-1 sudo[105139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eccjhqhtblthnujsysrmsvrymogrgwbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003184.3528833-1056-239399711108153/AnsiballZ_systemd.py'
Oct 09 09:46:24 compute-1 sudo[105139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:24 compute-1 python3.9[105159]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:24 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 09 09:46:24 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 09 09:46:24 compute-1 systemd[1]: man-db-cache-update.service: Consumed 7.108s CPU time.
Oct 09 09:46:24 compute-1 systemd[1]: run-r4dcd61a2936c4a27a00885585635cc71.service: Deactivated successfully.
Oct 09 09:46:24 compute-1 systemd[1]: Reloading.
Oct 09 09:46:24 compute-1 systemd-sysv-generator[105521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:24 compute-1 systemd-rc-local-generator[105514]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:25.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:25 compute-1 sudo[105139]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:25 compute-1 sudo[105677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grdwuesndzsegncsdajkpewkxysvfcba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003185.217561-1056-116197075527032/AnsiballZ_systemd.py'
Oct 09 09:46:25 compute-1 sudo[105677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:25 compute-1 python3.9[105679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:25 compute-1 systemd[1]: Reloading.
Oct 09 09:46:25 compute-1 systemd-rc-local-generator[105703]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:25 compute-1 systemd-sysv-generator[105706]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:25 compute-1 ceph-mon[9795]: pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 0 B/s wr, 164 op/s
Oct 09 09:46:25 compute-1 sudo[105677]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:25.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:26 compute-1 sudo[105867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flfhtuzqvuqyizfklsnfgxetpdlpvgjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003186.0400424-1056-153216953837325/AnsiballZ_systemd.py'
Oct 09 09:46:26 compute-1 sudo[105867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:26 compute-1 python3.9[105869]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:26 compute-1 systemd[1]: Reloading.
Oct 09 09:46:26 compute-1 systemd-rc-local-generator[105893]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:26 compute-1 systemd-sysv-generator[105898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:26 compute-1 sudo[105867]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:27 compute-1 sudo[106057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sveddwytzjnltsbieppykyzlvkfntoly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003187.0528314-1056-105177796533972/AnsiballZ_systemd.py'
Oct 09 09:46:27 compute-1 sudo[106057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:27 compute-1 python3.9[106059]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:27 compute-1 sudo[106057]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:27 compute-1 sudo[106213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfakwyctgawwljqdlppgqabymoisgqae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003187.6333861-1056-36270435470936/AnsiballZ_systemd.py'
Oct 09 09:46:27 compute-1 sudo[106213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:27 compute-1 ceph-mon[9795]: pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 165 op/s
Oct 09 09:46:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:28.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:28 compute-1 python3.9[106215]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:28 compute-1 systemd[1]: Reloading.
Oct 09 09:46:28 compute-1 systemd-rc-local-generator[106239]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:28 compute-1 systemd-sysv-generator[106242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:28 compute-1 sudo[106213]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:29.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:29 compute-1 sudo[106403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjyapirprxqwjzgpldatjsokbtvmpwxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003188.9470778-1164-189763219897932/AnsiballZ_systemd.py'
Oct 09 09:46:29 compute-1 sudo[106403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:29 compute-1 python3.9[106405]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:29 compute-1 systemd[1]: Reloading.
Oct 09 09:46:29 compute-1 systemd-rc-local-generator[106429]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:29 compute-1 systemd-sysv-generator[106432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:29 compute-1 sudo[106445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:46:29 compute-1 sudo[106445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:29 compute-1 sudo[106445]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:29 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 09 09:46:29 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 09 09:46:29 compute-1 sudo[106403]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:29 compute-1 ceph-mon[9795]: pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 0 B/s wr, 164 op/s
Oct 09 09:46:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:30.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:30 compute-1 sudo[106622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivotgefwpzjbnxmxtwbqxrvfodnisjuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003190.0152395-1188-133709992276870/AnsiballZ_systemd.py'
Oct 09 09:46:30 compute-1 sudo[106622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:30 compute-1 python3.9[106624]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:30 compute-1 sudo[106622]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:30 compute-1 sudo[106777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdtkpkhjyuesnrsqooroyxevowkfejmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003190.6008801-1188-44561707796764/AnsiballZ_systemd.py'
Oct 09 09:46:30 compute-1 sudo[106777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:31 compute-1 python3.9[106779]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:31 compute-1 sudo[106777]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:31.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:31 compute-1 sudo[106933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouswqvbkrekqrwwavlbcfaenbriracgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003191.178334-1188-106305293367650/AnsiballZ_systemd.py'
Oct 09 09:46:31 compute-1 sudo[106933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:31 compute-1 python3.9[106935]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:31 compute-1 sudo[106933]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:31 compute-1 ceph-mon[9795]: pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 0 B/s wr, 164 op/s
Oct 09 09:46:31 compute-1 sudo[107088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcncnsuinuaiyupqvthhoxjozhuqsrna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003191.7583196-1188-194658529925718/AnsiballZ_systemd.py'
Oct 09 09:46:31 compute-1 sudo[107088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:32.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:32 compute-1 python3.9[107090]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:32 compute-1 sudo[107088]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:32 compute-1 sudo[107243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fechhnfmocmwtibehfbbgcwqvpoxxcyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003192.3490953-1188-190392584615889/AnsiballZ_systemd.py'
Oct 09 09:46:32 compute-1 sudo[107243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:32 compute-1 python3.9[107245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:32 compute-1 sudo[107243]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:33.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:33 compute-1 sudo[107398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnarhdiwkilignmqdinpaecjemlrissx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003192.943632-1188-270885714625138/AnsiballZ_systemd.py'
Oct 09 09:46:33 compute-1 sudo[107398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:33 compute-1 python3.9[107400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:33 compute-1 sudo[107398]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:33 compute-1 sudo[107554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxbccxpblrrzajmdkuvvkypsbyrrdfil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003193.5256598-1188-214978430780957/AnsiballZ_systemd.py'
Oct 09 09:46:33 compute-1 sudo[107554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:33 compute-1 ceph-mon[9795]: pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 165 op/s
Oct 09 09:46:33 compute-1 python3.9[107556]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:34.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:34 compute-1 sudo[107554]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:34 compute-1 sudo[107709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilxhkdnbinlsfyvjetdoszrpvkkmgxlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003194.1056023-1188-274203865665433/AnsiballZ_systemd.py'
Oct 09 09:46:34 compute-1 sudo[107709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:34 compute-1 python3.9[107711]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:34 compute-1 sudo[107709]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:34 compute-1 sudo[107864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpmjgfhoartglhoypvsvotfnyvssjhhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003194.694977-1188-73051744592806/AnsiballZ_systemd.py'
Oct 09 09:46:34 compute-1 sudo[107864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:46:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:35.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:35 compute-1 python3.9[107866]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:35 compute-1 sudo[107864]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:35 compute-1 sudo[108020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxikvxiopqcdrsmfjxveaqkivtbiivjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003195.2728648-1188-10610026551098/AnsiballZ_systemd.py'
Oct 09 09:46:35 compute-1 sudo[108020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:35 compute-1 python3.9[108022]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:35 compute-1 sudo[108020]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:35 compute-1 ceph-mon[9795]: pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:46:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:36.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:46:36 compute-1 sudo[108175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrrepneprmksexpnksgdclpmmvspxnqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003195.8890345-1188-108340066587599/AnsiballZ_systemd.py'
Oct 09 09:46:36 compute-1 sudo[108175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:36 compute-1 python3.9[108177]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:36 compute-1 sudo[108175]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:36 compute-1 sudo[108330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcxrwyhxuonwrsdccratkmdseazgpgxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003196.4686074-1188-116772615221300/AnsiballZ_systemd.py'
Oct 09 09:46:36 compute-1 sudo[108330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:36 compute-1 python3.9[108332]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:36 compute-1 sudo[108330]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:37.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:37 compute-1 sudo[108485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikiularwjgozncqusnpebejyjmnojvud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003197.052153-1188-176020853346972/AnsiballZ_systemd.py'
Oct 09 09:46:37 compute-1 sudo[108485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:37 compute-1 python3.9[108487]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:37 compute-1 sudo[108485]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:37 compute-1 sudo[108641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uushratcjdmbufaeyuhtyyruligzurxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003197.6290023-1188-137364187865362/AnsiballZ_systemd.py'
Oct 09 09:46:37 compute-1 sudo[108641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:37 compute-1 ceph-mon[9795]: pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:38.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:38 compute-1 python3.9[108643]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:38 compute-1 sudo[108641]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:46:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:39.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:46:39 compute-1 ceph-mon[9795]: pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:46:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:46:40 compute-1 sudo[108807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nowckfjjskapiygossiytmhlrerdsufe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003200.2799232-1494-253692667620978/AnsiballZ_file.py'
Oct 09 09:46:40 compute-1 sudo[108807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:40 compute-1 podman[108771]: 2025-10-09 09:46:40.486270058 +0000 UTC m=+0.041619735 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 09:46:40 compute-1 python3.9[108813]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:40 compute-1 sudo[108807]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:40 compute-1 sudo[108965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mldxaxsvcrsbupckzedywhjdhpusofuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003200.749894-1494-238044902689958/AnsiballZ_file.py'
Oct 09 09:46:40 compute-1 sudo[108965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:41 compute-1 python3.9[108967]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:41 compute-1 sudo[108965]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:41.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:41 compute-1 sudo[109118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avuqpluwvbjsvkrbxbjojyrzqwjgsohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003201.2070916-1494-242653264412626/AnsiballZ_file.py'
Oct 09 09:46:41 compute-1 sudo[109118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:41 compute-1 python3.9[109120]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:41 compute-1 sudo[109118]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:41 compute-1 sudo[109270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyohgvezormklirjbyzxbayrclmruseu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003201.653415-1494-154380583515318/AnsiballZ_file.py'
Oct 09 09:46:41 compute-1 sudo[109270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:41 compute-1 ceph-mon[9795]: pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:41 compute-1 python3.9[109272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:42 compute-1 sudo[109270]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:42.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:42 compute-1 sudo[109422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwpuhbizsmbkufksrvnzumpoyeouexmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003202.1038377-1494-41139220198978/AnsiballZ_file.py'
Oct 09 09:46:42 compute-1 sudo[109422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:42 compute-1 python3.9[109424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:42 compute-1 sudo[109422]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:42 compute-1 sudo[109574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ontpqgkmloavoblcdundzbmupzxxvoil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003202.6255772-1494-138359373546546/AnsiballZ_file.py'
Oct 09 09:46:42 compute-1 sudo[109574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:42 compute-1 python3.9[109576]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:42 compute-1 sudo[109574]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:43.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:43 compute-1 sudo[109727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-povdxphobqvqjgiqrpaufbcqdghlenmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003203.2617877-1623-94484127265197/AnsiballZ_stat.py'
Oct 09 09:46:43 compute-1 sudo[109727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:43 compute-1 python3.9[109729]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:43 compute-1 sudo[109727]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:43 compute-1 ceph-mon[9795]: pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:46:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:44.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:44 compute-1 sudo[109852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjasebjpwxdwwiqscgrvejcvqzuhjxwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003203.2617877-1623-94484127265197/AnsiballZ_copy.py'
Oct 09 09:46:44 compute-1 sudo[109852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:44 compute-1 python3.9[109854]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003203.2617877-1623-94484127265197/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:44 compute-1 sudo[109852]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:44 compute-1 sudo[110004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdgnhhjegjwbmomsaxcmmlceddqtfhpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003204.4043381-1623-247449231915489/AnsiballZ_stat.py'
Oct 09 09:46:44 compute-1 sudo[110004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:44 compute-1 python3.9[110006]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:44 compute-1 sudo[110004]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:44 compute-1 sudo[110129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrkyzamcqqggrpdqjnozdjgmurcuymmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003204.4043381-1623-247449231915489/AnsiballZ_copy.py'
Oct 09 09:46:44 compute-1 sudo[110129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:46:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:46:45 compute-1 python3.9[110131]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003204.4043381-1623-247449231915489/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:45 compute-1 sudo[110129]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:45 compute-1 sudo[110282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhhhklgrfekjtichunilajnvjalogwba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003205.254716-1623-102473465103626/AnsiballZ_stat.py'
Oct 09 09:46:45 compute-1 sudo[110282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:45 compute-1 python3.9[110284]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:45 compute-1 sudo[110282]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:45 compute-1 sudo[110407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrqlzeabzmtkwfivmabxnnqeyzjwhifv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003205.254716-1623-102473465103626/AnsiballZ_copy.py'
Oct 09 09:46:45 compute-1 sudo[110407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:45 compute-1 ceph-mon[9795]: pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:46:45 compute-1 python3.9[110409]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003205.254716-1623-102473465103626/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:46 compute-1 sudo[110407]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:46.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:46 compute-1 sudo[110559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqkcoojwqrneulswsktbueebxxyopesa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003206.1206498-1623-176114099442017/AnsiballZ_stat.py'
Oct 09 09:46:46 compute-1 sudo[110559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:46 compute-1 python3.9[110561]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:46 compute-1 sudo[110559]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:46 compute-1 sudo[110684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfzvpyzaznqjqxtojxwuubndzfccrlxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003206.1206498-1623-176114099442017/AnsiballZ_copy.py'
Oct 09 09:46:46 compute-1 sudo[110684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:46 compute-1 python3.9[110686]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003206.1206498-1623-176114099442017/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:46 compute-1 sudo[110684]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:47.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:47 compute-1 sudo[110836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zesadomcepsgmxiclwylrnstlxaqjwgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003206.967985-1623-40843198265761/AnsiballZ_stat.py'
Oct 09 09:46:47 compute-1 sudo[110836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:47 compute-1 python3.9[110838]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:47 compute-1 sudo[110836]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:47 compute-1 sudo[110962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yagissjnfqujoehowzfcfsvkfajwrjhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003206.967985-1623-40843198265761/AnsiballZ_copy.py'
Oct 09 09:46:47 compute-1 sudo[110962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:47 compute-1 python3.9[110964]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003206.967985-1623-40843198265761/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:47 compute-1 sudo[110962]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:47 compute-1 ceph-mon[9795]: pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:46:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:48.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:48 compute-1 sudo[111114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeqnjzdgiypmbtoxbyyjllfonkozfwes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003207.8430717-1623-272439944509838/AnsiballZ_stat.py'
Oct 09 09:46:48 compute-1 sudo[111114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:48 compute-1 python3.9[111116]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:48 compute-1 sudo[111114]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:48 compute-1 sudo[111239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvlorfnipceknoakqaozysfrkwlcssni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003207.8430717-1623-272439944509838/AnsiballZ_copy.py'
Oct 09 09:46:48 compute-1 sudo[111239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:48 compute-1 python3.9[111241]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003207.8430717-1623-272439944509838/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:48 compute-1 sudo[111239]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:48 compute-1 sudo[111391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwbqypgypsksezwvpmfbgxcpthbpfgfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003208.6829345-1623-166609101794916/AnsiballZ_stat.py'
Oct 09 09:46:48 compute-1 sudo[111391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:49 compute-1 python3.9[111393]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:49 compute-1 sudo[111391]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:49.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:49 compute-1 sudo[111514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzpvhlyyxngymgqxhqzuowfodacbmbwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003208.6829345-1623-166609101794916/AnsiballZ_copy.py'
Oct 09 09:46:49 compute-1 sudo[111514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:49 compute-1 python3.9[111516]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003208.6829345-1623-166609101794916/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:49 compute-1 sudo[111514]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:49 compute-1 sudo[111667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kudiaigfiwgqgyjjnfdkqwcbabmlgxix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003209.4848008-1623-114454026950569/AnsiballZ_stat.py'
Oct 09 09:46:49 compute-1 sudo[111667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:49 compute-1 python3.9[111669]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:49 compute-1 sudo[111667]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:49 compute-1 sudo[111672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:46:49 compute-1 sudo[111672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:49 compute-1 sudo[111672]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:50 compute-1 ceph-mon[9795]: pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:46:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:46:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:50.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:50 compute-1 sudo[111817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mukszrebqpngrsiedbmqbfeldchlrhvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003209.4848008-1623-114454026950569/AnsiballZ_copy.py'
Oct 09 09:46:50 compute-1 sudo[111817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:50 compute-1 python3.9[111819]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003209.4848008-1623-114454026950569/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:50 compute-1 sudo[111817]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:50 compute-1 sudo[111980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzwpyijrmkbopazdxmwxkrpygbyxmfei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003210.679171-1962-267404231364932/AnsiballZ_command.py'
Oct 09 09:46:50 compute-1 sudo[111980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:50 compute-1 podman[111943]: 2025-10-09 09:46:50.879439718 +0000 UTC m=+0.052134596 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 09:46:51 compute-1 python3.9[111989]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 09 09:46:51 compute-1 sudo[111980]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:51.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:51 compute-1 sudo[112148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zibixaxzyheywelpabbksokjtlhkvkwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003211.2331219-1989-91841001764052/AnsiballZ_file.py'
Oct 09 09:46:51 compute-1 sudo[112148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:51 compute-1 python3.9[112150]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:51 compute-1 sudo[112148]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:51 compute-1 sudo[112300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wearmunnhanmqyqrjnhjpjmuvocjtkki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003211.667529-1989-106230302322624/AnsiballZ_file.py'
Oct 09 09:46:51 compute-1 sudo[112300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:51 compute-1 python3.9[112302]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:52 compute-1 sudo[112300]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:52 compute-1 ceph-mon[9795]: pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:46:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:52.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:52 compute-1 sudo[112452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giavirbkpkcxovblujsgqhkbgwwibyww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003212.093171-1989-84213646687904/AnsiballZ_file.py'
Oct 09 09:46:52 compute-1 sudo[112452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:52 compute-1 python3.9[112454]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:52 compute-1 sudo[112452]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:52 compute-1 sudo[112604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duiixhsrortxtzrsjdaazrsylegubgfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003212.5024536-1989-48252526192081/AnsiballZ_file.py'
Oct 09 09:46:52 compute-1 sudo[112604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:52 compute-1 python3.9[112606]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:52 compute-1 sudo[112604]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:53 compute-1 sudo[112756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syphgykwmycqvqdbnascoxkcivdtolus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003212.9209769-1989-65987777884207/AnsiballZ_file.py'
Oct 09 09:46:53 compute-1 sudo[112756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:53 compute-1 python3.9[112758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:53 compute-1 sudo[112756]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:53 compute-1 sudo[112909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niwoznjeijbwrastfkgqciuoqlyfyzue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003213.3396409-1989-111116187877098/AnsiballZ_file.py'
Oct 09 09:46:53 compute-1 sudo[112909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:53 compute-1 python3.9[112911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:53 compute-1 sudo[112909]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:53 compute-1 sudo[113061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeszsemvlskqvlvxnhgecxmabsrajhqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003213.762013-1989-271163135874355/AnsiballZ_file.py'
Oct 09 09:46:53 compute-1 sudo[113061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:54 compute-1 ceph-mon[9795]: pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:46:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:54 compute-1 python3.9[113063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:54 compute-1 sudo[113061]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:54 compute-1 sudo[113213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyatxzskpquqnlkerbklblaiewscesxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003214.1795914-1989-254092727583431/AnsiballZ_file.py'
Oct 09 09:46:54 compute-1 sudo[113213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:54 compute-1 python3.9[113215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:54 compute-1 sudo[113213]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:54 compute-1 sudo[113365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpkwgywqcjkvbnwzomwksyjzhuplxfuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003214.6015005-1989-34538988295591/AnsiballZ_file.py'
Oct 09 09:46:54 compute-1 sudo[113365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:54 compute-1 python3.9[113367]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:54 compute-1 sudo[113365]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:55.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:55 compute-1 sudo[113517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsjmrehfopplbqaevtxnvtfgtblvvcoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003215.0244634-1989-56233569814500/AnsiballZ_file.py'
Oct 09 09:46:55 compute-1 sudo[113517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:55 compute-1 python3.9[113519]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:55 compute-1 sudo[113517]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:55 compute-1 sudo[113670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywyppuquytsyzgtlkyasdjjeqvevpklv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003215.4508245-1989-122344282908919/AnsiballZ_file.py'
Oct 09 09:46:55 compute-1 sudo[113670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:55 compute-1 python3.9[113672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:55 compute-1 sudo[113670]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:56.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:56 compute-1 ceph-mon[9795]: pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:56 compute-1 sudo[113822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dttqokgbsuovkvsdyyotwomqiiujbbij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003215.868592-1989-91382254963342/AnsiballZ_file.py'
Oct 09 09:46:56 compute-1 sudo[113822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:56 compute-1 python3.9[113824]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:56 compute-1 sudo[113822]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:56 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 09 09:46:56 compute-1 sudo[113976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sekvnvllsvvhhzrxgbvicelwgocpusgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003216.2911732-1989-204342173099506/AnsiballZ_file.py'
Oct 09 09:46:56 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 09 09:46:56 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 09 09:46:56 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 09 09:46:56 compute-1 sudo[113976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:56 compute-1 python3.9[113978]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:56 compute-1 sudo[113976]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:56 compute-1 sudo[114129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkoqayffklaxlkxakehhlhelnzezckxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003216.7430904-1989-131718879245944/AnsiballZ_file.py'
Oct 09 09:46:56 compute-1 sudo[114129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:57 compute-1 python3.9[114131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:57 compute-1 sudo[114129]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:57 compute-1 sudo[114282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dflnrjjzququekbpxopifiguvwiwscse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003217.602828-2286-211289499365653/AnsiballZ_stat.py'
Oct 09 09:46:57 compute-1 sudo[114282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:57 compute-1 python3.9[114284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:57 compute-1 sudo[114282]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:58.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:58 compute-1 ceph-mon[9795]: pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:58 compute-1 sudo[114405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kruhpfxsjmocwqxvyapjkwcveopqsuug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003217.602828-2286-211289499365653/AnsiballZ_copy.py'
Oct 09 09:46:58 compute-1 sudo[114405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:58 compute-1 python3.9[114407]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003217.602828-2286-211289499365653/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:58 compute-1 sudo[114405]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:58 compute-1 sudo[114557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twgeqszsfpoeodpsrqfzciiiouertwvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003218.4361467-2286-80334658557767/AnsiballZ_stat.py'
Oct 09 09:46:58 compute-1 sudo[114557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:58 compute-1 python3.9[114559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:58 compute-1 sudo[114557]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:58 compute-1 sudo[114680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jabvmvmwsxnixzejygvnlqvkkhmwlqub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003218.4361467-2286-80334658557767/AnsiballZ_copy.py'
Oct 09 09:46:58 compute-1 sudo[114680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:46:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:59.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:59 compute-1 python3.9[114682]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003218.4361467-2286-80334658557767/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:59 compute-1 sudo[114680]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:59 compute-1 sudo[114833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eddnextqzittlkbborhdhgptvfvpbpbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003219.2670748-2286-149914489264776/AnsiballZ_stat.py'
Oct 09 09:46:59 compute-1 sudo[114833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:59 compute-1 python3.9[114835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:59 compute-1 sudo[114833]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:59 compute-1 sudo[114956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmzymjzkqiydtuhmdvusxbzjxpcazaml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003219.2670748-2286-149914489264776/AnsiballZ_copy.py'
Oct 09 09:46:59 compute-1 sudo[114956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:00 compute-1 python3.9[114958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003219.2670748-2286-149914489264776/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:00.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:00 compute-1 sudo[114956]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:00 compute-1 ceph-mon[9795]: pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:00 compute-1 sudo[115108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hplazumezlhcmmqhgusvmxxqoqyskqjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003220.1324515-2286-223116848468069/AnsiballZ_stat.py'
Oct 09 09:47:00 compute-1 sudo[115108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:00 compute-1 python3.9[115110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:00 compute-1 sudo[115108]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:00 compute-1 sudo[115231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceirsbqdztlvwhndywoywkizfwprfnfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003220.1324515-2286-223116848468069/AnsiballZ_copy.py'
Oct 09 09:47:00 compute-1 sudo[115231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:00 compute-1 python3.9[115233]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003220.1324515-2286-223116848468069/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:00 compute-1 sudo[115231]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:01.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:01 compute-1 sudo[115383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umlnjexyxetvkbqyncujpvmulotjzkxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003220.9936302-2286-136501184197750/AnsiballZ_stat.py'
Oct 09 09:47:01 compute-1 sudo[115383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:01 compute-1 python3.9[115385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:01 compute-1 sudo[115383]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:01 compute-1 sudo[115507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciwquwpaytlvsbwhtsfjtfqcuzwcanuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003220.9936302-2286-136501184197750/AnsiballZ_copy.py'
Oct 09 09:47:01 compute-1 sudo[115507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:01 compute-1 python3.9[115509]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003220.9936302-2286-136501184197750/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:01 compute-1 sudo[115507]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:02.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:02 compute-1 ceph-mon[9795]: pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:02 compute-1 sudo[115659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvfanpdwkhcpmhhnjnmbxowgxhialzpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003221.878107-2286-277151286130713/AnsiballZ_stat.py'
Oct 09 09:47:02 compute-1 sudo[115659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:02 compute-1 python3.9[115661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:02 compute-1 sudo[115659]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:02 compute-1 sudo[115782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbzcccjyloqrfdjpmcgoaguesibavdiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003221.878107-2286-277151286130713/AnsiballZ_copy.py'
Oct 09 09:47:02 compute-1 sudo[115782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:02 compute-1 python3.9[115784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003221.878107-2286-277151286130713/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:02 compute-1 sudo[115782]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:02 compute-1 sudo[115934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyunvptwjdkqjxloxpkatzzufxzqqorb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003222.8004577-2286-59029059534338/AnsiballZ_stat.py'
Oct 09 09:47:02 compute-1 sudo[115934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:03.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:03 compute-1 python3.9[115936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:03 compute-1 sudo[115934]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:03 compute-1 sudo[116058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsmuajholeguzdutmydojktgjquozubf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003222.8004577-2286-59029059534338/AnsiballZ_copy.py'
Oct 09 09:47:03 compute-1 sudo[116058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:03 compute-1 python3.9[116060]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003222.8004577-2286-59029059534338/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:03 compute-1 sudo[116058]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:03 compute-1 sudo[116210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhomcgujltixiqhwdjazldbrkywxfjwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003223.6758366-2286-147574103004957/AnsiballZ_stat.py'
Oct 09 09:47:03 compute-1 sudo[116210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:04 compute-1 python3.9[116212]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:04 compute-1 sudo[116210]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:04.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:04 compute-1 ceph-mon[9795]: pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:04 compute-1 sudo[116333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrveaylgonhehizzjwbsinfstcyihpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003223.6758366-2286-147574103004957/AnsiballZ_copy.py'
Oct 09 09:47:04 compute-1 sudo[116333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:04 compute-1 python3.9[116335]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003223.6758366-2286-147574103004957/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:04 compute-1 sudo[116333]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:04 compute-1 sudo[116485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnesmhekpnesljtbfrultklamtbhiytf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003224.557896-2286-44985817347735/AnsiballZ_stat.py'
Oct 09 09:47:04 compute-1 sudo[116485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:04 compute-1 python3.9[116487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:04 compute-1 sudo[116485]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:47:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:05.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:05 compute-1 sudo[116608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhznkbmmiiljwveauscwcbjgjiionpkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003224.557896-2286-44985817347735/AnsiballZ_copy.py'
Oct 09 09:47:05 compute-1 sudo[116608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:05 compute-1 python3.9[116610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003224.557896-2286-44985817347735/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:05 compute-1 sudo[116608]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:05 compute-1 sudo[116761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdaolfswwqfvginlovtpftmuxxwpjbqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003225.431307-2286-192413514458065/AnsiballZ_stat.py'
Oct 09 09:47:05 compute-1 sudo[116761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:05 compute-1 python3.9[116763]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:05 compute-1 sudo[116761]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:06.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:06 compute-1 sudo[116884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkjivuhdcifkwynbtzkoljzgvzwuvvjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003225.431307-2286-192413514458065/AnsiballZ_copy.py'
Oct 09 09:47:06 compute-1 sudo[116884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:06 compute-1 ceph-mon[9795]: pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:06 compute-1 python3.9[116886]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003225.431307-2286-192413514458065/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:06 compute-1 sudo[116884]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:06 compute-1 sudo[117036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptdzfruzbsocfdrccpudimdiajocgvzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003226.3589654-2286-11005073057086/AnsiballZ_stat.py'
Oct 09 09:47:06 compute-1 sudo[117036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:06 compute-1 python3.9[117038]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:06 compute-1 sudo[117036]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:07 compute-1 sudo[117159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnummoofmjomycuqclhqckbbrxhqdhqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003226.3589654-2286-11005073057086/AnsiballZ_copy.py'
Oct 09 09:47:07 compute-1 sudo[117159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:07.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:07 compute-1 python3.9[117161]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003226.3589654-2286-11005073057086/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:07 compute-1 sudo[117159]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:07 compute-1 sudo[117312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngvsvpbmdytgnmiquseosbtqirpmbwov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003227.327268-2286-8652316216764/AnsiballZ_stat.py'
Oct 09 09:47:07 compute-1 sudo[117312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:07 compute-1 python3.9[117314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:07 compute-1 sudo[117312]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:07 compute-1 sudo[117435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkfhehiaysezatrpsoufhyiiwdjnxgww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003227.327268-2286-8652316216764/AnsiballZ_copy.py'
Oct 09 09:47:07 compute-1 sudo[117435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:08 compute-1 ceph-mon[9795]: pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:08 compute-1 python3.9[117437]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003227.327268-2286-8652316216764/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:08 compute-1 sudo[117435]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:08 compute-1 sudo[117587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spdytogsbefsqhrjvpqomyjohkdkffyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003228.193715-2286-89789867298836/AnsiballZ_stat.py'
Oct 09 09:47:08 compute-1 sudo[117587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:08 compute-1 python3.9[117589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:08 compute-1 sudo[117587]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:08 compute-1 sudo[117710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgaqyzladcblznnyfyometylbyvezfvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003228.193715-2286-89789867298836/AnsiballZ_copy.py'
Oct 09 09:47:08 compute-1 sudo[117710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:08 compute-1 python3.9[117712]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003228.193715-2286-89789867298836/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:08 compute-1 sudo[117710]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:09.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:09 compute-1 sudo[117862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juxsyxzrcxatjmiyxtwmvidafovmerhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003229.0156772-2286-272140018624589/AnsiballZ_stat.py'
Oct 09 09:47:09 compute-1 sudo[117862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:09 compute-1 python3.9[117864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:09 compute-1 sudo[117862]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:09 compute-1 sudo[117986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixrjidfeqbvqwlcftxxybrwfygwvwnml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003229.0156772-2286-272140018624589/AnsiballZ_copy.py'
Oct 09 09:47:09 compute-1 sudo[117986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:09 compute-1 python3.9[117988]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003229.0156772-2286-272140018624589/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:09 compute-1 sudo[117986]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:09 compute-1 sudo[118013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:47:09 compute-1 sudo[118013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:09 compute-1 sudo[118013]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:47:10.026 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:47:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:47:10.026 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:47:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:47:10.026 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:47:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:10.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:10 compute-1 ceph-mon[9795]: pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:10 compute-1 python3.9[118163]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:10 compute-1 sudo[118325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hicnhymxwcqmkhnztmkxqxqkhwaspuji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003230.535569-2904-189453404219737/AnsiballZ_seboolean.py'
Oct 09 09:47:10 compute-1 sudo[118325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:10 compute-1 podman[118290]: 2025-10-09 09:47:10.887276528 +0000 UTC m=+0.064951752 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 09 09:47:11 compute-1 python3.9[118334]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 09 09:47:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:11.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:11 compute-1 sudo[118325]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:12 compute-1 ceph-mon[9795]: pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:12 compute-1 sudo[118489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzqpzmhbkhilqlayhknzophzafxsodvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003231.9670596-2928-65704756896196/AnsiballZ_copy.py'
Oct 09 09:47:12 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 09 09:47:12 compute-1 sudo[118489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:12 compute-1 python3.9[118491]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:12 compute-1 sudo[118489]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:12 compute-1 sudo[118641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlgzbcokjhqggcnzyxuicdncomacfeqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003232.417617-2928-59185333238569/AnsiballZ_copy.py'
Oct 09 09:47:12 compute-1 sudo[118641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:12 compute-1 python3.9[118643]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:12 compute-1 sudo[118641]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:13 compute-1 sudo[118793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbwdlrecndyefbbltzjxkyhwjziuaduc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003232.8700407-2928-145845383945171/AnsiballZ_copy.py'
Oct 09 09:47:13 compute-1 sudo[118793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:13.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:13 compute-1 python3.9[118795]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:13 compute-1 sudo[118793]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:13 compute-1 sudo[118946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrwnomyhtkpzazcrfaviqxptercouzzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003233.3276687-2928-15362641180441/AnsiballZ_copy.py'
Oct 09 09:47:13 compute-1 sudo[118946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:13 compute-1 python3.9[118948]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:13 compute-1 sudo[118946]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:13 compute-1 sudo[119098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjknrppuknlwtskmgiasqksckwrsksp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003233.765488-2928-66490750944030/AnsiballZ_copy.py'
Oct 09 09:47:13 compute-1 sudo[119098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:14 compute-1 python3.9[119100]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:14 compute-1 ceph-mon[9795]: pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:14 compute-1 sudo[119098]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:14 compute-1 sudo[119250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfkpibrhkngzixwlpamrqofjzubsfdxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003234.268808-3036-9668538473270/AnsiballZ_copy.py'
Oct 09 09:47:14 compute-1 sudo[119250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:14 compute-1 python3.9[119252]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:14 compute-1 sudo[119250]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:14 compute-1 sudo[119402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyjbpwhnowsboakgmahjaqdkprrlufmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003234.7275054-3036-82287570501426/AnsiballZ_copy.py'
Oct 09 09:47:14 compute-1 sudo[119402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:15 compute-1 python3.9[119404]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:15 compute-1 sudo[119402]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:15 compute-1 sudo[119555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkigcvttmueqkrblewaapgmcnynmgwma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003235.205292-3036-254435986335067/AnsiballZ_copy.py'
Oct 09 09:47:15 compute-1 sudo[119555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:15 compute-1 python3.9[119557]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:15 compute-1 sudo[119555]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:15 compute-1 sudo[119707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrhixgzvvxmaaxgdupjgqlpwdwokbbrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003235.7011762-3036-165007199788983/AnsiballZ_copy.py'
Oct 09 09:47:15 compute-1 sudo[119707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:16.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:16 compute-1 python3.9[119709]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:16 compute-1 sudo[119707]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:16 compute-1 ceph-mon[9795]: pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:16 compute-1 sudo[119859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peghtqjhuchuiogslphertoazapftmqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003236.2426417-3036-98430560852165/AnsiballZ_copy.py'
Oct 09 09:47:16 compute-1 sudo[119859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:16 compute-1 python3.9[119861]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:16 compute-1 sudo[119859]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:17 compute-1 sudo[120011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlckznynlaaanccnamprbmvmxoakggqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003237.0521648-3144-61037446533563/AnsiballZ_systemd.py'
Oct 09 09:47:17 compute-1 sudo[120011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:17 compute-1 python3.9[120013]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:17 compute-1 systemd[1]: Reloading.
Oct 09 09:47:17 compute-1 systemd-sysv-generator[120038]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:17 compute-1 systemd-rc-local-generator[120035]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:17 compute-1 systemd[1]: Starting dnf makecache...
Oct 09 09:47:17 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Oct 09 09:47:17 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Oct 09 09:47:17 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 09 09:47:17 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 09 09:47:17 compute-1 systemd[1]: Starting libvirt logging daemon...
Oct 09 09:47:17 compute-1 systemd[1]: Started libvirt logging daemon.
Oct 09 09:47:17 compute-1 sudo[120011]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:17 compute-1 dnf[120050]: Metadata cache refreshed recently.
Oct 09 09:47:17 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 09 09:47:17 compute-1 systemd[1]: Finished dnf makecache.
Oct 09 09:47:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:18.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:18 compute-1 ceph-mon[9795]: pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:18 compute-1 sudo[120205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-potqqujzbsimvhumdpcgtobncfksituh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003237.977586-3144-150532936240223/AnsiballZ_systemd.py'
Oct 09 09:47:18 compute-1 sudo[120205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:18 compute-1 python3.9[120207]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:18 compute-1 systemd[1]: Reloading.
Oct 09 09:47:18 compute-1 systemd-sysv-generator[120235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:18 compute-1 systemd-rc-local-generator[120231]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:18 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 09 09:47:18 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 09 09:47:18 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 09 09:47:18 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 09 09:47:18 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 09 09:47:18 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 09 09:47:18 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Oct 09 09:47:18 compute-1 systemd[1]: Started libvirt nodedev daemon.
Oct 09 09:47:18 compute-1 sudo[120205]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:19 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 09 09:47:19 compute-1 sudo[120420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uscusbpoysqvwapstdbmdkfsusclxxzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003238.8425837-3144-272121284305973/AnsiballZ_systemd.py'
Oct 09 09:47:19 compute-1 sudo[120420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:19.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:19 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 09 09:47:19 compute-1 python3.9[120423]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:19 compute-1 systemd[1]: Reloading.
Oct 09 09:47:19 compute-1 systemd-rc-local-generator[120446]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:19 compute-1 systemd-sysv-generator[120449]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:19 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 09 09:47:19 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 09 09:47:19 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 09 09:47:19 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 09 09:47:19 compute-1 systemd[1]: Starting libvirt proxy daemon...
Oct 09 09:47:19 compute-1 systemd[1]: Started libvirt proxy daemon.
Oct 09 09:47:19 compute-1 sudo[120420]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:19 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 09 09:47:19 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 09 09:47:19 compute-1 sudo[120640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svevjxyibhnxmnmzonstfllvkuiqfyzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003239.7340775-3144-200785260538745/AnsiballZ_systemd.py'
Oct 09 09:47:19 compute-1 sudo[120640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:20.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:20 compute-1 ceph-mon[9795]: pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:47:20 compute-1 python3.9[120642]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:20 compute-1 systemd[1]: Reloading.
Oct 09 09:47:20 compute-1 systemd-rc-local-generator[120662]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:20 compute-1 systemd-sysv-generator[120666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:20 compute-1 setroubleshoot[120422]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l efc5bd63-4429-4b01-9c17-474f112f439f
Oct 09 09:47:20 compute-1 setroubleshoot[120422]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 09 09:47:20 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Oct 09 09:47:20 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 09 09:47:20 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 09 09:47:20 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 09 09:47:20 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 09 09:47:20 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 09 09:47:20 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 09 09:47:20 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 09 09:47:20 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 09 09:47:20 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 09 09:47:20 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Oct 09 09:47:20 compute-1 systemd[1]: Started libvirt QEMU daemon.
Oct 09 09:47:20 compute-1 sudo[120640]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:20 compute-1 sudo[120854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szjthyweyeobpmquvalarbauqaijjzye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003240.6165721-3144-137274376194056/AnsiballZ_systemd.py'
Oct 09 09:47:20 compute-1 sudo[120854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:21 compute-1 python3.9[120856]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:21 compute-1 systemd[1]: Reloading.
Oct 09 09:47:21 compute-1 systemd-rc-local-generator[120897]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:21.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:21 compute-1 systemd-sysv-generator[120900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:21 compute-1 podman[120858]: 2025-10-09 09:47:21.168925831 +0000 UTC m=+0.072513290 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:47:21 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Oct 09 09:47:21 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Oct 09 09:47:21 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 09 09:47:21 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 09 09:47:21 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 09 09:47:21 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 09 09:47:21 compute-1 systemd[1]: Starting libvirt secret daemon...
Oct 09 09:47:21 compute-1 systemd[1]: Started libvirt secret daemon.
Oct 09 09:47:21 compute-1 sudo[120854]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:21 compute-1 sudo[121088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siyigonvetbfaejtwrbufhnsqrocyvkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003241.6419294-3255-50947836854641/AnsiballZ_file.py'
Oct 09 09:47:21 compute-1 sudo[121088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:21 compute-1 python3.9[121090]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:22 compute-1 sudo[121088]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:22.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:22 compute-1 ceph-mon[9795]: pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:22 compute-1 sudo[121240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqzjrdfcwbmwwymekswwdizddmqlegsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003242.1549914-3279-28232708185424/AnsiballZ_find.py'
Oct 09 09:47:22 compute-1 sudo[121240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:22 compute-1 python3.9[121242]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:47:22 compute-1 sudo[121240]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:22 compute-1 auditd[730]: Audit daemon rotating log files
Oct 09 09:47:22 compute-1 sudo[121392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uemiqjfxkhmeljjzakiiribustuldkup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003242.64788-3303-213019004631500/AnsiballZ_command.py'
Oct 09 09:47:22 compute-1 sudo[121392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:22 compute-1 python3.9[121394]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:23 compute-1 sudo[121392]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:23 compute-1 sudo[121423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:47:23 compute-1 sudo[121423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:23 compute-1 sudo[121423]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:23.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:23 compute-1 sudo[121448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:47:23 compute-1 sudo[121448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:23 compute-1 sudo[121448]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:23 compute-1 python3.9[121613]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:47:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:24.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:24 compute-1 ceph-mon[9795]: pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:47:24 compute-1 python3.9[121777]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:24 compute-1 python3.9[121898]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003243.9599235-3360-786994992123/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c150843fcb80d0d0a9968a12abeb036b918e43ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:25 compute-1 sudo[122048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybjqprzsxdqrtrscivgirqunfpehnwcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003244.9278688-3405-75502521404096/AnsiballZ_command.py'
Oct 09 09:47:25 compute-1 sudo[122048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:25.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:25 compute-1 python3.9[122050]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 286f8bf0-da72-5823-9a4e-ac4457d9e609
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:25 compute-1 polkitd[1120]: Registered Authentication Agent for unix-process:122052:92891 (system bus name :1.1276 [/usr/bin/pkttyagent --process 122052 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 09 09:47:25 compute-1 polkitd[1120]: Unregistered Authentication Agent for unix-process:122052:92891 (system bus name :1.1276, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 09 09:47:25 compute-1 polkitd[1120]: Registered Authentication Agent for unix-process:122051:92891 (system bus name :1.1277 [/usr/bin/pkttyagent --process 122051 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 09 09:47:25 compute-1 polkitd[1120]: Unregistered Authentication Agent for unix-process:122051:92891 (system bus name :1.1277, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 09 09:47:25 compute-1 sudo[122048]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:25 compute-1 python3.9[122213]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:26.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:26 compute-1 ceph-mon[9795]: pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:26 compute-1 sudo[122363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwfcrsxktasmmrglszlpwxrbdlwhixlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003246.0996299-3453-194881055965384/AnsiballZ_command.py'
Oct 09 09:47:26 compute-1 sudo[122363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:26 compute-1 sudo[122363]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:26 compute-1 sudo[122516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkiljmsqyzdpeuvwsgavfwojtwhrquxo ; FSID=286f8bf0-da72-5823-9a4e-ac4457d9e609 KEY=AQBWgedoAAAAABAA+vk8nE5nieplThBL84fakw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003246.656215-3477-120174493510964/AnsiballZ_command.py'
Oct 09 09:47:26 compute-1 sudo[122516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:26 compute-1 sudo[122519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:47:26 compute-1 sudo[122519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:26 compute-1 sudo[122519]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:27 compute-1 polkitd[1120]: Registered Authentication Agent for unix-process:122544:93061 (system bus name :1.1281 [/usr/bin/pkttyagent --process 122544 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 09 09:47:27 compute-1 polkitd[1120]: Unregistered Authentication Agent for unix-process:122544:93061 (system bus name :1.1281, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 09 09:47:27 compute-1 sudo[122516]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:27.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:27 compute-1 sudo[122700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dklilfiivxpfxphsasufilylqfwcdvnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003247.2270224-3501-24293610505785/AnsiballZ_copy.py'
Oct 09 09:47:27 compute-1 sudo[122700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:27 compute-1 python3.9[122702]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:27 compute-1 sudo[122700]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:27 compute-1 ceph-mon[9795]: pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:47:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:47:27 compute-1 sudo[122852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnfrvhbzhdltbxbavvtnlarhuglxytyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003247.7633202-3525-72180035016196/AnsiballZ_stat.py'
Oct 09 09:47:27 compute-1 sudo[122852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:28.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:28 compute-1 python3.9[122854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:28 compute-1 sudo[122852]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:28 compute-1 sudo[122975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eonqnefyblsaetgxcilrkhgvofrjenpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003247.7633202-3525-72180035016196/AnsiballZ_copy.py'
Oct 09 09:47:28 compute-1 sudo[122975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:28 compute-1 python3.9[122977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003247.7633202-3525-72180035016196/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:28 compute-1 sudo[122975]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:29.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:29 compute-1 sudo[123127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmqmblvnypbgplvhullbsgkiswnsvzoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003249.1228986-3573-12598764317056/AnsiballZ_file.py'
Oct 09 09:47:29 compute-1 sudo[123127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:29 compute-1 python3.9[123129]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:29 compute-1 sudo[123127]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:29 compute-1 ceph-mon[9795]: pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:29 compute-1 sudo[123280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrslzveqwstvduntamcjsdlsbyffbwex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003249.6533437-3597-169110011090918/AnsiballZ_stat.py'
Oct 09 09:47:29 compute-1 sudo[123280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:29 compute-1 sudo[123283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:47:29 compute-1 sudo[123283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:29 compute-1 sudo[123283]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:30 compute-1 python3.9[123282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:30 compute-1 sudo[123280]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:30.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:30 compute-1 sudo[123383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrqdffnsbvwkujkvitttktyhptjlrfik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003249.6533437-3597-169110011090918/AnsiballZ_file.py'
Oct 09 09:47:30 compute-1 sudo[123383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:30 compute-1 python3.9[123385]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:30 compute-1 sudo[123383]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:30 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 09 09:47:30 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 09 09:47:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:30 compute-1 sudo[123535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkhtlgokwqbeetcnbyavsnpmcfxiezra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003250.5623877-3633-180867084237708/AnsiballZ_stat.py'
Oct 09 09:47:30 compute-1 sudo[123535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:30 compute-1 python3.9[123537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:30 compute-1 sudo[123535]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:31 compute-1 sudo[123613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlqffaesvrehradygqqnkakvnuagvkfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003250.5623877-3633-180867084237708/AnsiballZ_file.py'
Oct 09 09:47:31 compute-1 sudo[123613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:31.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:31 compute-1 python3.9[123615]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.z10upoo7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:31 compute-1 sudo[123613]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:31 compute-1 sudo[123766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgnkvrxymcjyflxulynvrjyhgtvyleua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003251.4001625-3669-230332263595267/AnsiballZ_stat.py'
Oct 09 09:47:31 compute-1 sudo[123766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:31 compute-1 python3.9[123768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:31 compute-1 sudo[123766]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:31 compute-1 ceph-mon[9795]: pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:31 compute-1 sudo[123844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-looyclxxbzjzhnfziswkwvegrmxpfvra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003251.4001625-3669-230332263595267/AnsiballZ_file.py'
Oct 09 09:47:31 compute-1 sudo[123844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:32.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:32 compute-1 python3.9[123846]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:32 compute-1 sudo[123844]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:32 compute-1 sudo[123996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqqtizitysgnqmtvhxhdxqrvtqemyxcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003252.349359-3708-209115308287513/AnsiballZ_command.py'
Oct 09 09:47:32 compute-1 sudo[123996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:32 compute-1 python3.9[123998]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:32 compute-1 sudo[123996]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:33.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:33 compute-1 sudo[124149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyxvoaobrnglfcavqvdgpkqrsbyizuwo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003252.8851209-3732-161136396930314/AnsiballZ_edpm_nftables_from_files.py'
Oct 09 09:47:33 compute-1 sudo[124149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:33 compute-1 python3[124151]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 09 09:47:33 compute-1 sudo[124149]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:33 compute-1 sudo[124302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyhvfmuwkqbmywsuzdbggkrrpijdkfwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003253.5115488-3756-266008284710965/AnsiballZ_stat.py'
Oct 09 09:47:33 compute-1 sudo[124302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:33 compute-1 ceph-mon[9795]: pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:33 compute-1 python3.9[124304]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:33 compute-1 sudo[124302]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:34 compute-1 sudo[124380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fimfwevwlirsyemepkxfhantbbvjsqnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003253.5115488-3756-266008284710965/AnsiballZ_file.py'
Oct 09 09:47:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:34 compute-1 sudo[124380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:34.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:34 compute-1 python3.9[124382]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:34 compute-1 sudo[124380]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:34 compute-1 sudo[124532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sumnrbkinurxtoqrvqyzcswxvkaqqcbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003254.3849423-3792-104583164099576/AnsiballZ_stat.py'
Oct 09 09:47:34 compute-1 sudo[124532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:34 compute-1 python3.9[124534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:34 compute-1 sudo[124532]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:47:34 compute-1 sudo[124610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptqwdwdluwlxxdhkqvhjnnonlhzdbpkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003254.3849423-3792-104583164099576/AnsiballZ_file.py'
Oct 09 09:47:34 compute-1 sudo[124610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:35 compute-1 python3.9[124612]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:35 compute-1 sudo[124610]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:35.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:35 compute-1 sudo[124763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywqtgnicdnenqzqelcvcfcxulqawvghe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003255.2640011-3828-242119901311843/AnsiballZ_stat.py'
Oct 09 09:47:35 compute-1 sudo[124763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:35 compute-1 python3.9[124765]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:35 compute-1 sudo[124763]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:35 compute-1 sudo[124841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skuekgorfwcjgncjupcubcvlnzwhshhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003255.2640011-3828-242119901311843/AnsiballZ_file.py'
Oct 09 09:47:35 compute-1 sudo[124841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:35 compute-1 ceph-mon[9795]: pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:35 compute-1 python3.9[124843]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:35 compute-1 sudo[124841]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:36.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:36 compute-1 sudo[124993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pslxdvbjszdpetnlaelglljgthfygihz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003256.1286006-3864-253854157070905/AnsiballZ_stat.py'
Oct 09 09:47:36 compute-1 sudo[124993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:36 compute-1 python3.9[124995]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:36 compute-1 sudo[124993]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:36 compute-1 sudo[125071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyqnuurbpgzjuniosqzeisealjsobwmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003256.1286006-3864-253854157070905/AnsiballZ_file.py'
Oct 09 09:47:36 compute-1 sudo[125071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:36 compute-1 python3.9[125073]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:36 compute-1 sudo[125071]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:37.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:37 compute-1 sudo[125223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asvmleqlcudpnhuknwejqarxakqbcphg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003256.994507-3900-42259446352375/AnsiballZ_stat.py'
Oct 09 09:47:37 compute-1 sudo[125223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:37 compute-1 python3.9[125225]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:37 compute-1 sudo[125223]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:37 compute-1 sudo[125349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbzrcpvjfndaheztakzohbdsplvfuqzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003256.994507-3900-42259446352375/AnsiballZ_copy.py'
Oct 09 09:47:37 compute-1 sudo[125349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:37 compute-1 python3.9[125351]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003256.994507-3900-42259446352375/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:37 compute-1 sudo[125349]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:37 compute-1 ceph-mon[9795]: pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:38.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:38 compute-1 sudo[125501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuvntuuralawybpjthpxlwmaiecocdtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003257.980516-3945-52059326980028/AnsiballZ_file.py'
Oct 09 09:47:38 compute-1 sudo[125501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:38 compute-1 python3.9[125503]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:38 compute-1 sudo[125501]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:38 compute-1 sudo[125653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boimzlrssaheyvpzmcieoseddeyflkik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003258.4960692-3969-65553834620616/AnsiballZ_command.py'
Oct 09 09:47:38 compute-1 sudo[125653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:38 compute-1 python3.9[125655]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:38 compute-1 sudo[125653]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:39.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:39 compute-1 sudo[125808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhnnoghwocdyufgsxsqaagpqmnqfqnhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003258.9936805-3993-137738599139163/AnsiballZ_blockinfile.py'
Oct 09 09:47:39 compute-1 sudo[125808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:39 compute-1 python3.9[125810]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:39 compute-1 sudo[125808]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:39 compute-1 ceph-mon[9795]: pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:39 compute-1 sudo[125961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqgmifzhdffwldjttmvhiwcromalwyep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003259.7286997-4020-166983159841117/AnsiballZ_command.py'
Oct 09 09:47:39 compute-1 sudo[125961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:40.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:40 compute-1 python3.9[125963]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:40 compute-1 sudo[125961]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:40 compute-1 sudo[126114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaibmtfzmiuaaicmderjgfnfnkfhwemp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003260.2592163-4044-233781516468907/AnsiballZ_stat.py'
Oct 09 09:47:40 compute-1 sudo[126114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:40 compute-1 python3.9[126116]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:47:40 compute-1 sudo[126114]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:41 compute-1 sudo[126277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoldavvwegxdlwqrbieafxmujdtvzewt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003260.8040679-4068-228443390900039/AnsiballZ_command.py'
Oct 09 09:47:41 compute-1 sudo[126277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:41 compute-1 podman[126242]: 2025-10-09 09:47:41.030226759 +0000 UTC m=+0.044737486 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 09 09:47:41 compute-1 python3.9[126284]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:41 compute-1 sudo[126277]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:41 compute-1 sudo[126440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iowahgahbqzlmhlsinaribzfpcweqige ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003261.3661811-4092-157951109699960/AnsiballZ_file.py'
Oct 09 09:47:41 compute-1 sudo[126440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:41 compute-1 python3.9[126442]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:41 compute-1 sudo[126440]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:41 compute-1 ceph-mon[9795]: pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:42 compute-1 sudo[126592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gesqbtdnrjimbqqhouhexncojatbpnze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003261.8802435-4116-60463342325944/AnsiballZ_stat.py'
Oct 09 09:47:42 compute-1 sudo[126592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:42.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:42 compute-1 python3.9[126594]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:42 compute-1 sudo[126592]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:42 compute-1 sudo[126715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmrayvwpsrkjszroquurqgnjiltznxbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003261.8802435-4116-60463342325944/AnsiballZ_copy.py'
Oct 09 09:47:42 compute-1 sudo[126715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:42 compute-1 python3.9[126717]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003261.8802435-4116-60463342325944/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:42 compute-1 sudo[126715]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:43 compute-1 sudo[126867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdrtndjhxyzertteynbapzeohoixlpkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003262.874545-4161-32603034343188/AnsiballZ_stat.py'
Oct 09 09:47:43 compute-1 sudo[126867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:43.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:43 compute-1 python3.9[126869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:43 compute-1 sudo[126867]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:43 compute-1 sudo[126991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfxbybhhcvufifhcfegfpehkajrkvwxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003262.874545-4161-32603034343188/AnsiballZ_copy.py'
Oct 09 09:47:43 compute-1 sudo[126991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:43 compute-1 python3.9[126993]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003262.874545-4161-32603034343188/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:43 compute-1 sudo[126991]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:43 compute-1 ceph-mon[9795]: pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:47:44 compute-1 sudo[127143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifgqgabuyuajeoludsnmcyjdswuxkrpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003263.8493674-4206-1931520554495/AnsiballZ_stat.py'
Oct 09 09:47:44 compute-1 sudo[127143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:44.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:44 compute-1 python3.9[127145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:44 compute-1 sudo[127143]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:44 compute-1 sudo[127266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxwwyabwsnkrmyateifxvzhnfipkmyae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003263.8493674-4206-1931520554495/AnsiballZ_copy.py'
Oct 09 09:47:44 compute-1 sudo[127266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:44 compute-1 python3.9[127268]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003263.8493674-4206-1931520554495/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:44 compute-1 sudo[127266]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:45 compute-1 sudo[127418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrseuacpkvikxzknvrfdkykghemyprrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003264.8307233-4251-64607835647759/AnsiballZ_systemd.py'
Oct 09 09:47:45 compute-1 sudo[127418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:45 compute-1 python3.9[127420]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:47:45 compute-1 systemd[1]: Reloading.
Oct 09 09:47:45 compute-1 systemd-rc-local-generator[127441]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:45 compute-1 systemd-sysv-generator[127444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:45 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Oct 09 09:47:45 compute-1 sudo[127418]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:45 compute-1 ceph-mon[9795]: pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:47:46 compute-1 sudo[127610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydosfkvwapndncermuavlsomjdbxuqqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003265.7926192-4275-54693310634223/AnsiballZ_systemd.py'
Oct 09 09:47:46 compute-1 sudo[127610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:46.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:46 compute-1 python3.9[127612]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 09 09:47:46 compute-1 systemd[1]: Reloading.
Oct 09 09:47:46 compute-1 systemd-rc-local-generator[127632]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:46 compute-1 systemd-sysv-generator[127636]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:46 compute-1 systemd[1]: Reloading.
Oct 09 09:47:46 compute-1 systemd-rc-local-generator[127668]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:46 compute-1 systemd-sysv-generator[127671]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:46 compute-1 sudo[127610]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:47 compute-1 sshd-session[71308]: Connection closed by 192.168.122.30 port 41084
Oct 09 09:47:47 compute-1 sshd-session[71305]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:47:47 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Oct 09 09:47:47 compute-1 systemd[1]: session-36.scope: Consumed 2min 24.446s CPU time.
Oct 09 09:47:47 compute-1 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Oct 09 09:47:47 compute-1 systemd-logind[798]: Removed session 36.
Oct 09 09:47:47 compute-1 ceph-mon[9795]: pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:47:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:48.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:49.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:49 compute-1 ceph-mon[9795]: pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:47:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:47:49 compute-1 sudo[127710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:47:49 compute-1 sudo[127710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:49 compute-1 sudo[127710]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:50.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:51.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:51 compute-1 podman[127736]: 2025-10-09 09:47:51.549201522 +0000 UTC m=+0.060469136 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 09:47:51 compute-1 ceph-mon[9795]: pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:47:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:52.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:52 compute-1 sshd-session[127759]: Accepted publickey for zuul from 192.168.122.30 port 59730 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:47:52 compute-1 systemd-logind[798]: New session 37 of user zuul.
Oct 09 09:47:52 compute-1 systemd[1]: Started Session 37 of User zuul.
Oct 09 09:47:52 compute-1 sshd-session[127759]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:47:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:53 compute-1 python3.9[127912]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:47:53 compute-1 ceph-mon[9795]: pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:47:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:54.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:54 compute-1 sudo[128067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mghkiebgpjmbnwfytaryqkfjkqjzmmeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003274.0998907-63-105030571188171/AnsiballZ_file.py'
Oct 09 09:47:54 compute-1 sudo[128067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:54 compute-1 python3.9[128069]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:47:54 compute-1 sudo[128067]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:54 compute-1 sudo[128219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znmzvfwakeadhqwrxtjugyxndqgoozsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003274.723039-63-268063105966318/AnsiballZ_file.py'
Oct 09 09:47:54 compute-1 sudo[128219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:55 compute-1 python3.9[128221]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:47:55 compute-1 sudo[128219]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000021s ======
Oct 09 09:47:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:55.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct 09 09:47:55 compute-1 sudo[128372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfglfyngdmwjkimezrpwlqnweqlialpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003275.175143-63-49678634771887/AnsiballZ_file.py'
Oct 09 09:47:55 compute-1 sudo[128372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:55 compute-1 python3.9[128374]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:47:55 compute-1 sudo[128372]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:55 compute-1 sudo[128524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heejdxivbsmbqirzkewdrxaxrrmkjqpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003275.6393075-63-43531476288690/AnsiballZ_file.py'
Oct 09 09:47:55 compute-1 sudo[128524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:55 compute-1 ceph-mon[9795]: pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:56 compute-1 python3.9[128526]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 09:47:56 compute-1 sudo[128524]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:56.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:56 compute-1 sudo[128676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrrmmcdqvkmmuystnsifwhwewwkgoeon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003276.1200097-63-107170724141019/AnsiballZ_file.py'
Oct 09 09:47:56 compute-1 sudo[128676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:56 compute-1 python3.9[128678]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:47:56 compute-1 sudo[128676]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:47:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:57.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:47:57 compute-1 sudo[128829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmrdalumetvmqzonnixucgbzfyzytkzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003277.0621197-171-281472426458283/AnsiballZ_stat.py'
Oct 09 09:47:57 compute-1 sudo[128829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:57 compute-1 python3.9[128831]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:47:57 compute-1 sudo[128829]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:57 compute-1 ceph-mon[9795]: pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:58.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:58 compute-1 sudo[128983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owjpcerajpyuwxwishkebpkrgrpdokla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003277.735974-195-232644582536333/AnsiballZ_systemd.py'
Oct 09 09:47:58 compute-1 sudo[128983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:58 compute-1 python3.9[128985]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:47:58 compute-1 systemd[1]: Reloading.
Oct 09 09:47:58 compute-1 systemd-sysv-generator[129010]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:58 compute-1 systemd-rc-local-generator[129007]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:58 compute-1 sudo[128983]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:47:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:59.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:59 compute-1 sudo[129172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djrklowbkoyhdpdkccwxvwkhfbjyrfml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003278.9553351-219-98125473800211/AnsiballZ_service_facts.py'
Oct 09 09:47:59 compute-1 sudo[129172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:59 compute-1 python3.9[129174]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:47:59 compute-1 network[129192]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:47:59 compute-1 network[129193]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:47:59 compute-1 network[129194]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:47:59 compute-1 ceph-mon[9795]: pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:01.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:01 compute-1 sudo[129172]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:01 compute-1 ceph-mon[9795]: pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:01 compute-1 sudo[129467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwszyvdscfjxvouaimzqbpamiqghklon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003281.7699172-243-62331252839017/AnsiballZ_systemd.py'
Oct 09 09:48:02 compute-1 sudo[129467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:48:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:48:02 compute-1 python3.9[129469]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:02 compute-1 systemd[1]: Reloading.
Oct 09 09:48:02 compute-1 systemd-rc-local-generator[129491]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:02 compute-1 systemd-sysv-generator[129495]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:02 compute-1 sudo[129467]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:03 compute-1 python3.9[129656]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:03.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:03 compute-1 sudo[129807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgjkpvbcdhdqgjsgbkisqgvcbpmutknw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003283.3567224-294-262227626706148/AnsiballZ_podman_container.py'
Oct 09 09:48:03 compute-1 sudo[129807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:03 compute-1 python3.9[129809]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None 
passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 09 09:48:04 compute-1 ceph-mon[9795]: pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:04 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:48:04 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:48:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:04.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:48:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:05.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:06 compute-1 ceph-mon[9795]: pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:06 compute-1 podman[129819]: 2025-10-09 09:48:06.48051894 +0000 UTC m=+2.523842031 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 09 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.574704994 +0000 UTC m=+0.028165151 container create 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.5997] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 09 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered blocking state
Oct 09 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct 09 09:48:06 compute-1 kernel: veth0: entered allmulticast mode
Oct 09 09:48:06 compute-1 kernel: veth0: entered promiscuous mode
Oct 09 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered blocking state
Oct 09 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered forwarding state
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6108] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6124] device (veth0): carrier: link connected
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6125] device (podman0): carrier: link connected
Oct 09 09:48:06 compute-1 systemd-udevd[129897]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:48:06 compute-1 systemd-udevd[129894]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:48:06 compute-1 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 09 09:48:06 compute-1 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6420] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6425] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6429] device (podman0): Activation: starting connection 'podman0' (067949ec-2c39-4a84-9a73-11234d5a389d)
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6430] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6431] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6432] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6434] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 09 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.562215549 +0000 UTC m=+0.015675706 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 09 09:48:06 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6635] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6638] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6643] device (podman0): Activation: successful, device activated.
Oct 09 09:48:06 compute-1 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 09 09:48:06 compute-1 systemd[1]: Started libpod-conmon-314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159.scope.
Oct 09 09:48:06 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.833009994 +0000 UTC m=+0.286470172 container init 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.838148786 +0000 UTC m=+0.291608933 container start 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.839504122 +0000 UTC m=+0.292964279 container attach 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:48:06 compute-1 iscsid_config[130018]: iqn.1994-05.com.redhat:ef5dd0d75ccc
Oct 09 09:48:06 compute-1 systemd[1]: libpod-314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159.scope: Deactivated successfully.
Oct 09 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.841276285 +0000 UTC m=+0.294736443 container died 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct 09 09:48:06 compute-1 kernel: veth0 (unregistering): left allmulticast mode
Oct 09 09:48:06 compute-1 kernel: veth0 (unregistering): left promiscuous mode
Oct 09 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct 09 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.8778] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:48:07 compute-1 systemd[1]: run-netns-netns\x2dfb309e41\x2dcd9d\x2de926\x2d2704\x2d519c9dc048d5.mount: Deactivated successfully.
Oct 09 09:48:07 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159-userdata-shm.mount: Deactivated successfully.
Oct 09 09:48:07 compute-1 podman[129867]: 2025-10-09 09:48:07.1361214 +0000 UTC m=+0.589581557 container remove 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 09 09:48:07 compute-1 python3.9[129809]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct 09 09:48:07 compute-1 systemd[1]: libpod-conmon-314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159.scope: Deactivated successfully.
Oct 09 09:48:07 compute-1 python3.9[129809]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 09 09:48:07 compute-1 sudo[129807]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:07.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:07 compute-1 sudo[130252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyjrbccnkytndhotwoiuonsffwwfruyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003287.3659604-318-280030819220850/AnsiballZ_stat.py'
Oct 09 09:48:07 compute-1 sudo[130252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-d326a3a5531b39b0223e7ba13637b2c394d3ee4c081ebe0095898470adf76f4d-merged.mount: Deactivated successfully.
Oct 09 09:48:07 compute-1 python3.9[130254]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:07 compute-1 sudo[130252]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:08 compute-1 ceph-mon[9795]: pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:08 compute-1 sudo[130375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtngnssbylrdlqpxtbbbalkiynqqtaon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003287.3659604-318-280030819220850/AnsiballZ_copy.py'
Oct 09 09:48:08 compute-1 sudo[130375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:08 compute-1 python3.9[130377]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003287.3659604-318-280030819220850/.source.iscsi _original_basename=.ab755stf follow=False checksum=e75d4b19d897bf62fe4bce81ee6c77032a8ac0d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:08 compute-1 sudo[130375]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:08 compute-1 sudo[130527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuqcgaxpazexbrjsvgrgfmbqvjnaahad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003288.4308755-363-206079161731059/AnsiballZ_file.py'
Oct 09 09:48:08 compute-1 sudo[130527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:08 compute-1 python3.9[130529]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:08 compute-1 sudo[130527]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:09.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:09 compute-1 python3.9[130680]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:10 compute-1 ceph-mon[9795]: pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:48:10.027 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:48:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:48:10.027 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:48:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:48:10.027 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:48:10 compute-1 sudo[130806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:48:10 compute-1 sudo[130806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:10 compute-1 sudo[130806]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:10 compute-1 sudo[130856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uckmzlrkaaeodqtwpnqdoagqfolurxbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003289.7279744-414-100972997235531/AnsiballZ_lineinfile.py'
Oct 09 09:48:10 compute-1 sudo[130856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:10.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:10 compute-1 python3.9[130859]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:10 compute-1 sudo[130856]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:10 compute-1 sudo[131009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzeqwlrkwhgefvgrtrttxlwddiwjqigp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003290.4863992-441-56544431832139/AnsiballZ_file.py'
Oct 09 09:48:10 compute-1 sudo[131009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:10 compute-1 python3.9[131011]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:10 compute-1 sudo[131009]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:11 compute-1 sudo[131169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uviqrfdztlakpsdgimullrtuheumspon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003291.0180442-465-112732807766385/AnsiballZ_stat.py'
Oct 09 09:48:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:11.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:11 compute-1 sudo[131169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:11 compute-1 podman[131135]: 2025-10-09 09:48:11.232299562 +0000 UTC m=+0.039091909 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 09 09:48:11 compute-1 python3.9[131179]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:11 compute-1 sudo[131169]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:11 compute-1 sudo[131256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puorxjgcicwxvrssqomkcjbpxeqpztpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003291.0180442-465-112732807766385/AnsiballZ_file.py'
Oct 09 09:48:11 compute-1 sudo[131256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:11 compute-1 python3.9[131258]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:11 compute-1 sudo[131256]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:12 compute-1 sudo[131408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzlwehunkgnsdbsvxhflgkpihussrzym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003291.8222246-465-104378125909651/AnsiballZ_stat.py'
Oct 09 09:48:12 compute-1 sudo[131408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:12 compute-1 ceph-mon[9795]: pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:12 compute-1 python3.9[131410]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:12 compute-1 sudo[131408]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:12 compute-1 sudo[131486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndhgcfihhesgbnaxfjngmekptplseaam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003291.8222246-465-104378125909651/AnsiballZ_file.py'
Oct 09 09:48:12 compute-1 sudo[131486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:12 compute-1 python3.9[131488]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:12 compute-1 sudo[131486]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:12 compute-1 sudo[131638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfkedytlrjamvlyamzxtkjuaszguxarv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003292.6974003-534-66172161814690/AnsiballZ_file.py'
Oct 09 09:48:12 compute-1 sudo[131638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:13 compute-1 python3.9[131640]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:13 compute-1 sudo[131638]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:48:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:13.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:48:13 compute-1 sudo[131791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvviscfujrpdhcyvhiyeuwbdmpbhrxsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003293.4343634-558-38104038837152/AnsiballZ_stat.py'
Oct 09 09:48:13 compute-1 sudo[131791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:13 compute-1 python3.9[131793]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:13 compute-1 sudo[131791]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:13 compute-1 sudo[131869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiyzipjdoskcdpwpzpwwmnsvkikwyfoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003293.4343634-558-38104038837152/AnsiballZ_file.py'
Oct 09 09:48:13 compute-1 sudo[131869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:14 compute-1 ceph-mon[9795]: pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:14.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:14 compute-1 python3.9[131871]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:14 compute-1 sudo[131869]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:14 compute-1 sudo[132021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxyvcrrsbpkrmodgjhtmzwrnuxrmyquj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003294.3090756-594-280643834061285/AnsiballZ_stat.py'
Oct 09 09:48:14 compute-1 sudo[132021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:14 compute-1 python3.9[132023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:14 compute-1 sudo[132021]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:14 compute-1 sudo[132099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttbmlznfjrkcgxfhrqcmscxdtqoottmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003294.3090756-594-280643834061285/AnsiballZ_file.py'
Oct 09 09:48:14 compute-1 sudo[132099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:14 compute-1 python3.9[132101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:14 compute-1 sudo[132099]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:48:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:48:15 compute-1 sudo[132252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eskzneijvqksynreetfvccqivxqikdea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003295.2135947-630-91825050893513/AnsiballZ_systemd.py'
Oct 09 09:48:15 compute-1 sudo[132252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:15 compute-1 python3.9[132254]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:15 compute-1 systemd[1]: Reloading.
Oct 09 09:48:15 compute-1 systemd-rc-local-generator[132275]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:15 compute-1 systemd-sysv-generator[132278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:15 compute-1 sudo[132252]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:16 compute-1 ceph-mon[9795]: pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:16.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:16 compute-1 sudo[132441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kypisapitgjlhuphpoyjqxqyiikzkwou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003296.0904098-654-163613448190810/AnsiballZ_stat.py'
Oct 09 09:48:16 compute-1 sudo[132441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:16 compute-1 python3.9[132443]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:16 compute-1 sudo[132441]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:16 compute-1 sudo[132519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwkqkdizkdwejjzxqpbsilcajodzfknj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003296.0904098-654-163613448190810/AnsiballZ_file.py'
Oct 09 09:48:16 compute-1 sudo[132519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:16 compute-1 python3.9[132521]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:16 compute-1 sudo[132519]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:16 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 09 09:48:17 compute-1 sudo[132671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brawoercwetomvfbytwdrpluukiozzgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003296.939857-690-56223276163542/AnsiballZ_stat.py'
Oct 09 09:48:17 compute-1 sudo[132671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:17 compute-1 python3.9[132673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:17 compute-1 sudo[132671]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:17 compute-1 sudo[132750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ninojanvwopirgcrhmjfqqbictgbwxyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003296.939857-690-56223276163542/AnsiballZ_file.py'
Oct 09 09:48:17 compute-1 sudo[132750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:17 compute-1 python3.9[132752]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:17 compute-1 sudo[132750]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:18 compute-1 sudo[132902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbxjzfculvspqliiajnsgybegiuqiklu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003297.8041928-726-218505856845126/AnsiballZ_systemd.py'
Oct 09 09:48:18 compute-1 sudo[132902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:18 compute-1 ceph-mon[9795]: pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:18.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:18 compute-1 python3.9[132904]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:18 compute-1 systemd[1]: Reloading.
Oct 09 09:48:18 compute-1 systemd-rc-local-generator[132925]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:18 compute-1 systemd-sysv-generator[132928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:18 compute-1 systemd[1]: Starting Create netns directory...
Oct 09 09:48:18 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:48:18 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:48:18 compute-1 systemd[1]: Finished Create netns directory.
Oct 09 09:48:18 compute-1 sudo[132902]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:19 compute-1 sudo[133095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlrbwvcwnmsnzjilnmzlfultxsumvpgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003298.9910102-756-36457880991255/AnsiballZ_file.py'
Oct 09 09:48:19 compute-1 sudo[133095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:48:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:19.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:48:19 compute-1 python3.9[133097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:19 compute-1 sudo[133095]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:19 compute-1 sudo[133248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cosynlwqetobdavhotjtmrtqaldylrjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003299.5134838-780-276881531271549/AnsiballZ_stat.py'
Oct 09 09:48:19 compute-1 sudo[133248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:19 compute-1 python3.9[133250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:19 compute-1 sudo[133248]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:20 compute-1 ceph-mon[9795]: pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:48:20 compute-1 sudo[133371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtwgvngqxoadlmygovlzlqnxqzqofzee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003299.5134838-780-276881531271549/AnsiballZ_copy.py'
Oct 09 09:48:20 compute-1 sudo[133371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:20.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:20 compute-1 python3.9[133373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003299.5134838-780-276881531271549/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:20 compute-1 sudo[133371]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:20 compute-1 sudo[133523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvxyzwtwilsisbddtomvsnaholvlqrgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003300.6977-831-2400749771816/AnsiballZ_file.py'
Oct 09 09:48:20 compute-1 sudo[133523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:21 compute-1 python3.9[133525]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:21 compute-1 sudo[133523]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:21.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:21 compute-1 sudo[133676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qunbibizcexuowzmycdinacmurpxbjnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003301.226468-855-141665857840063/AnsiballZ_stat.py'
Oct 09 09:48:21 compute-1 sudo[133676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:21 compute-1 python3.9[133678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:21 compute-1 sudo[133676]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:21 compute-1 sudo[133814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bohohptbodnffaefabbpjzcdphohmxkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003301.226468-855-141665857840063/AnsiballZ_copy.py'
Oct 09 09:48:21 compute-1 sudo[133814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:21 compute-1 podman[133773]: 2025-10-09 09:48:21.821129742 +0000 UTC m=+0.052841299 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 09 09:48:21 compute-1 python3.9[133821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003301.226468-855-141665857840063/.source.json _original_basename=.p6769hyl follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:21 compute-1 sudo[133814]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:22 compute-1 ceph-mon[9795]: pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:22.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:22 compute-1 sudo[133975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntouqnemluvjpvjoxurxxyeofipkfxky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003302.1250672-900-125023150947250/AnsiballZ_file.py'
Oct 09 09:48:22 compute-1 sudo[133975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:22 compute-1 python3.9[133977]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:22 compute-1 sudo[133975]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:22 compute-1 sudo[134127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwwuldlkjinavwocoiznsiiewfmmnttw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003302.6627018-924-44853881032054/AnsiballZ_stat.py'
Oct 09 09:48:22 compute-1 sudo[134127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:23 compute-1 sudo[134127]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:23 compute-1 sudo[134250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npbqdflwmmtsbhsvqhaibiprsxbanmse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003302.6627018-924-44853881032054/AnsiballZ_copy.py'
Oct 09 09:48:23 compute-1 sudo[134250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:23 compute-1 sudo[134250]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:24 compute-1 sudo[134403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfulbkgbbtyxeezpaxenizvgsbomazni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003303.736153-975-95796774977046/AnsiballZ_container_config_data.py'
Oct 09 09:48:24 compute-1 sudo[134403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:24 compute-1 ceph-mon[9795]: pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:48:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:24.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:48:24 compute-1 python3.9[134405]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 09 09:48:24 compute-1 sudo[134403]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:24 compute-1 sudo[134555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvvzgnjpaobxfpjgmwhxnofdilvpmjmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003304.4504983-1002-226214098009966/AnsiballZ_container_config_hash.py'
Oct 09 09:48:24 compute-1 sudo[134555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:24 compute-1 python3.9[134557]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:48:24 compute-1 sudo[134555]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:48:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:48:25 compute-1 sudo[134708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzztucnzdavsqzhehglwltiiaoxdszaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003305.1788604-1029-252603477129402/AnsiballZ_podman_container_info.py'
Oct 09 09:48:25 compute-1 sudo[134708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:25 compute-1 python3.9[134710]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 09:48:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:25 compute-1 sudo[134708]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:26 compute-1 ceph-mon[9795]: pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:27 compute-1 sudo[134807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:48:27 compute-1 sudo[134807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:27 compute-1 sudo[134807]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:27 compute-1 sudo[134855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:48:27 compute-1 sudo[134855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:27 compute-1 sudo[134930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnixqfugcotmbyohevhpqiqgzwdgcghx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003306.727705-1068-55684837189495/AnsiballZ_edpm_container_manage.py'
Oct 09 09:48:27 compute-1 sudo[134930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:27 compute-1 python3[134932]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:48:27 compute-1 podman[135011]: 2025-10-09 09:48:27.528960513 +0000 UTC m=+0.042501624 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:48:27 compute-1 podman[135028]: 2025-10-09 09:48:27.536960701 +0000 UTC m=+0.029074082 container create dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 09 09:48:27 compute-1 podman[135028]: 2025-10-09 09:48:27.523699754 +0000 UTC m=+0.015813155 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 09 09:48:27 compute-1 python3[134932]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 09 09:48:27 compute-1 sudo[134930]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:27 compute-1 podman[135066]: 2025-10-09 09:48:27.661763227 +0000 UTC m=+0.047271415 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:48:27 compute-1 podman[135011]: 2025-10-09 09:48:27.665806558 +0000 UTC m=+0.179347670 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:48:27 compute-1 podman[135241]: 2025-10-09 09:48:27.96732198 +0000 UTC m=+0.036361465 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:48:28 compute-1 sudo[135321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pltogyiljxozlezriaqyjlrlwamfvnmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003307.8186948-1092-126093410935414/AnsiballZ_stat.py'
Oct 09 09:48:28 compute-1 sudo[135321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:28 compute-1 podman[135286]: 2025-10-09 09:48:28.028825334 +0000 UTC m=+0.049003264 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:48:28 compute-1 podman[135241]: 2025-10-09 09:48:28.031179727 +0000 UTC m=+0.100219202 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:48:28 compute-1 ceph-mon[9795]: pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:48:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:28.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:48:28 compute-1 python3.9[135323]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:28 compute-1 sudo[135321]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:28 compute-1 podman[135432]: 2025-10-09 09:48:28.365980288 +0000 UTC m=+0.034568462 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:48:28 compute-1 podman[135432]: 2025-10-09 09:48:28.376875349 +0000 UTC m=+0.045463524 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:48:28 compute-1 podman[135483]: 2025-10-09 09:48:28.511039084 +0000 UTC m=+0.034715270 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, name=keepalived, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4)
Oct 09 09:48:28 compute-1 podman[135483]: 2025-10-09 09:48:28.520891959 +0000 UTC m=+0.044568124 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, release=1793)
Oct 09 09:48:28 compute-1 sudo[134855]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:28 compute-1 sudo[135533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:48:28 compute-1 sudo[135533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:28 compute-1 sudo[135533]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:28 compute-1 sudo[135587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:48:28 compute-1 sudo[135587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:28 compute-1 sudo[135685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fznmyjuyvfylrssdalnwplqvyvhvydvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003308.5607424-1119-93172423576324/AnsiballZ_file.py'
Oct 09 09:48:28 compute-1 sudo[135685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:28 compute-1 python3.9[135688]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:28 compute-1 sudo[135685]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:29 compute-1 sudo[135587]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:29 compute-1 sudo[135791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhozlsiqmibyuztjhqwpmchhrytkpquk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003308.5607424-1119-93172423576324/AnsiballZ_stat.py'
Oct 09 09:48:29 compute-1 sudo[135791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:29 compute-1 python3.9[135793]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:29 compute-1 sudo[135791]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:29 compute-1 ceph-mon[9795]: pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:48:29 compute-1 sudo[135943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyaouavlxjglrtsaxvvhtwrdpalskjql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003309.3536878-1119-277808438221150/AnsiballZ_copy.py'
Oct 09 09:48:29 compute-1 sudo[135943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:29 compute-1 python3.9[135945]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003309.3536878-1119-277808438221150/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:29 compute-1 sudo[135943]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:30 compute-1 sudo[136019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldiipzcxeqvhamrjnakfkelnehcfbshi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003309.3536878-1119-277808438221150/AnsiballZ_systemd.py'
Oct 09 09:48:30 compute-1 sudo[136019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:30 compute-1 sudo[136022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:48:30 compute-1 sudo[136022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:30 compute-1 sudo[136022]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:30 compute-1 python3.9[136021]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:48:30 compute-1 systemd[1]: Reloading.
Oct 09 09:48:30 compute-1 systemd-rc-local-generator[136071]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:30 compute-1 systemd-sysv-generator[136074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:30 compute-1 sudo[136019]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:30 compute-1 ceph-mon[9795]: pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:30 compute-1 ceph-mon[9795]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 09 09:48:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:30 compute-1 sudo[136155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgycgbnjoqpvvhmlvijrbobpjwrzliux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003309.3536878-1119-277808438221150/AnsiballZ_systemd.py'
Oct 09 09:48:30 compute-1 sudo[136155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:30 compute-1 python3.9[136157]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:30 compute-1 systemd[1]: Reloading.
Oct 09 09:48:31 compute-1 systemd-rc-local-generator[136180]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:31 compute-1 systemd-sysv-generator[136188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:31 compute-1 systemd[1]: Starting iscsid container...
Oct 09 09:48:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:31 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:48:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171c25a90a02b6960144863e473dc8c4aba64cc99d2a8c52edc8a42b57737968/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:48:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171c25a90a02b6960144863e473dc8c4aba64cc99d2a8c52edc8a42b57737968/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 09 09:48:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171c25a90a02b6960144863e473dc8c4aba64cc99d2a8c52edc8a42b57737968/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:48:31 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa.
Oct 09 09:48:31 compute-1 podman[136197]: 2025-10-09 09:48:31.336118059 +0000 UTC m=+0.078977003 container init dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 09:48:31 compute-1 iscsid[136209]: + sudo -E kolla_set_configs
Oct 09 09:48:31 compute-1 sudo[136215]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 09 09:48:31 compute-1 podman[136197]: 2025-10-09 09:48:31.358595409 +0000 UTC m=+0.101454334 container start dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:48:31 compute-1 podman[136197]: iscsid
Oct 09 09:48:31 compute-1 systemd[1]: Created slice User Slice of UID 0.
Oct 09 09:48:31 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 09 09:48:31 compute-1 systemd[1]: Started iscsid container.
Oct 09 09:48:31 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 09 09:48:31 compute-1 systemd[1]: Starting User Manager for UID 0...
Oct 09 09:48:31 compute-1 sudo[136155]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:31 compute-1 systemd[136228]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 09 09:48:31 compute-1 podman[136216]: 2025-10-09 09:48:31.437592559 +0000 UTC m=+0.071926606 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 09:48:31 compute-1 systemd[1]: dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa-5d5396f2cc627b69.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 09:48:31 compute-1 systemd[1]: dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa-5d5396f2cc627b69.service: Failed with result 'exit-code'.
Oct 09 09:48:31 compute-1 systemd[136228]: Queued start job for default target Main User Target.
Oct 09 09:48:31 compute-1 systemd[136228]: Created slice User Application Slice.
Oct 09 09:48:31 compute-1 systemd[136228]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 09 09:48:31 compute-1 systemd[136228]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:48:31 compute-1 systemd[136228]: Reached target Paths.
Oct 09 09:48:31 compute-1 systemd[136228]: Reached target Timers.
Oct 09 09:48:31 compute-1 systemd[136228]: Starting D-Bus User Message Bus Socket...
Oct 09 09:48:31 compute-1 systemd[136228]: Starting Create User's Volatile Files and Directories...
Oct 09 09:48:31 compute-1 systemd[136228]: Finished Create User's Volatile Files and Directories.
Oct 09 09:48:31 compute-1 systemd[136228]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:48:31 compute-1 systemd[136228]: Reached target Sockets.
Oct 09 09:48:31 compute-1 systemd[136228]: Reached target Basic System.
Oct 09 09:48:31 compute-1 systemd[1]: Started User Manager for UID 0.
Oct 09 09:48:31 compute-1 systemd[136228]: Reached target Main User Target.
Oct 09 09:48:31 compute-1 systemd[136228]: Startup finished in 95ms.
Oct 09 09:48:31 compute-1 systemd[1]: Started Session c3 of User root.
Oct 09 09:48:31 compute-1 sudo[136215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:48:31 compute-1 iscsid[136209]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:48:31 compute-1 iscsid[136209]: INFO:__main__:Validating config file
Oct 09 09:48:31 compute-1 iscsid[136209]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:48:31 compute-1 iscsid[136209]: INFO:__main__:Writing out command to execute
Oct 09 09:48:31 compute-1 sudo[136215]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:31 compute-1 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 09 09:48:31 compute-1 iscsid[136209]: ++ cat /run_command
Oct 09 09:48:31 compute-1 iscsid[136209]: + CMD='/usr/sbin/iscsid -f'
Oct 09 09:48:31 compute-1 iscsid[136209]: + ARGS=
Oct 09 09:48:31 compute-1 iscsid[136209]: + sudo kolla_copy_cacerts
Oct 09 09:48:31 compute-1 sudo[136276]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 09 09:48:31 compute-1 systemd[1]: Started Session c4 of User root.
Oct 09 09:48:31 compute-1 sudo[136276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:48:31 compute-1 sudo[136276]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:31 compute-1 iscsid[136209]: + [[ ! -n '' ]]
Oct 09 09:48:31 compute-1 iscsid[136209]: + . kolla_extend_start
Oct 09 09:48:31 compute-1 iscsid[136209]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 09 09:48:31 compute-1 iscsid[136209]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 09 09:48:31 compute-1 iscsid[136209]: Running command: '/usr/sbin/iscsid -f'
Oct 09 09:48:31 compute-1 iscsid[136209]: + umask 0022
Oct 09 09:48:31 compute-1 iscsid[136209]: + exec /usr/sbin/iscsid -f
Oct 09 09:48:31 compute-1 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 09 09:48:31 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Oct 09 09:48:31 compute-1 python3.9[136411]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:32.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:32 compute-1 sudo[136561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sticvjmphsdeyzjmedchramrkyvdyifk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003312.1509392-1230-99867643905718/AnsiballZ_file.py'
Oct 09 09:48:32 compute-1 sudo[136561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:32 compute-1 python3.9[136563]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:32 compute-1 sudo[136561]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:32 compute-1 ceph-mon[9795]: pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:48:32 compute-1 sudo[136588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:48:32 compute-1 sudo[136588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:32 compute-1 sudo[136588]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:33 compute-1 sudo[136738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puwhvoeevkltvrlhawyrpcbnikisbedk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003312.9078114-1263-83779561929090/AnsiballZ_service_facts.py'
Oct 09 09:48:33 compute-1 sudo[136738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:48:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:48:33 compute-1 python3.9[136740]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:48:33 compute-1 network[136757]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:48:33 compute-1 network[136758]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:48:33 compute-1 network[136759]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:48:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:48:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:34.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:48:34 compute-1 ceph-mon[9795]: pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:48:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:35.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:48:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:48:36 compute-1 sudo[136738]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:36 compute-1 ceph-mon[9795]: pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:37.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:37 compute-1 sudo[137035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckqpfsqjsbntgyzqrjcwkofvqosfjbfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003317.6913419-1293-175324946388016/AnsiballZ_file.py'
Oct 09 09:48:37 compute-1 sudo[137035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:38 compute-1 python3.9[137037]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 09:48:38 compute-1 sudo[137035]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:38.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:38 compute-1 sudo[137187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzxaxmfaihbmcndoogolmcwqxcwmlnwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003318.4282827-1317-114952162036816/AnsiballZ_modprobe.py'
Oct 09 09:48:38 compute-1 sudo[137187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:38 compute-1 ceph-mon[9795]: pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:38 compute-1 python3.9[137189]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 09 09:48:38 compute-1 sudo[137187]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:39.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:39 compute-1 sudo[137343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgtxumkuurkvddegnojxbyrvrjryponk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003319.0870924-1341-34865914720122/AnsiballZ_stat.py'
Oct 09 09:48:39 compute-1 sudo[137343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:39 compute-1 python3.9[137345]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:39 compute-1 sudo[137343]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:39 compute-1 sudo[137467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkghhfrpqfsxmhwfydorclzxyjatkrro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003319.0870924-1341-34865914720122/AnsiballZ_copy.py'
Oct 09 09:48:39 compute-1 sudo[137467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:39 compute-1 python3.9[137469]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003319.0870924-1341-34865914720122/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:39 compute-1 sudo[137467]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:40.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:40 compute-1 sudo[137619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdkzdmonwpxjnmyoyhooabbxfqfulap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003320.246849-1389-200379805933928/AnsiballZ_lineinfile.py'
Oct 09 09:48:40 compute-1 sudo[137619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:40 compute-1 python3.9[137621]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:40 compute-1 sudo[137619]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:40 compute-1 ceph-mon[9795]: pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:40 compute-1 sudo[137771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vklxiqfnhyttfycsxbbqpokweatsjfed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003320.8008895-1413-259297413209783/AnsiballZ_systemd.py'
Oct 09 09:48:40 compute-1 sudo[137771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:41 compute-1 python3.9[137773]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:48:41 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 09 09:48:41 compute-1 systemd[1]: Stopped Load Kernel Modules.
Oct 09 09:48:41 compute-1 systemd[1]: Stopping Load Kernel Modules...
Oct 09 09:48:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:41.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:41 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct 09 09:48:41 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct 09 09:48:41 compute-1 podman[137775]: 2025-10-09 09:48:41.302336338 +0000 UTC m=+0.042253576 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 09:48:41 compute-1 sudo[137771]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:41 compute-1 systemd[1]: Stopping User Manager for UID 0...
Oct 09 09:48:41 compute-1 systemd[136228]: Activating special unit Exit the Session...
Oct 09 09:48:41 compute-1 systemd[136228]: Stopped target Main User Target.
Oct 09 09:48:41 compute-1 systemd[136228]: Stopped target Basic System.
Oct 09 09:48:41 compute-1 systemd[136228]: Stopped target Paths.
Oct 09 09:48:41 compute-1 systemd[136228]: Stopped target Sockets.
Oct 09 09:48:41 compute-1 systemd[136228]: Stopped target Timers.
Oct 09 09:48:41 compute-1 systemd[136228]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 09:48:41 compute-1 systemd[136228]: Closed D-Bus User Message Bus Socket.
Oct 09 09:48:41 compute-1 systemd[136228]: Stopped Create User's Volatile Files and Directories.
Oct 09 09:48:41 compute-1 systemd[136228]: Removed slice User Application Slice.
Oct 09 09:48:41 compute-1 systemd[136228]: Reached target Shutdown.
Oct 09 09:48:41 compute-1 systemd[136228]: Finished Exit the Session.
Oct 09 09:48:41 compute-1 systemd[136228]: Reached target Exit the Session.
Oct 09 09:48:41 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Oct 09 09:48:41 compute-1 systemd[1]: Stopped User Manager for UID 0.
Oct 09 09:48:41 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 09 09:48:41 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 09 09:48:41 compute-1 sudo[137945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkrihzoreucumwhuzcajcmtpucbszuuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003321.467571-1437-11806665616531/AnsiballZ_file.py'
Oct 09 09:48:41 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 09 09:48:41 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 09 09:48:41 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Oct 09 09:48:41 compute-1 sudo[137945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:41 compute-1 python3.9[137947]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:41 compute-1 sudo[137945]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:42.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:42 compute-1 sudo[138097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymekjkzpyngijasxukncxlhpduktkyia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003322.2665162-1464-58842480941888/AnsiballZ_stat.py'
Oct 09 09:48:42 compute-1 sudo[138097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:42 compute-1 python3.9[138099]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:42 compute-1 sudo[138097]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:42 compute-1 ceph-mon[9795]: pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:43 compute-1 sudo[138249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iworwaceuryvrfelzzsxbcdckodmlacb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003322.8644242-1491-72426598380568/AnsiballZ_stat.py'
Oct 09 09:48:43 compute-1 sudo[138249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:43 compute-1 python3.9[138251]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:43 compute-1 sudo[138249]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:48:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:43.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:48:43 compute-1 sudo[138402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tucfpkigiqcpahwskxsjoobbsfgpxtzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003323.429502-1515-100436937641012/AnsiballZ_stat.py'
Oct 09 09:48:43 compute-1 sudo[138402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:43 compute-1 python3.9[138404]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:43 compute-1 sudo[138402]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:44 compute-1 sudo[138525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfolvrkfohinqjjtrrdpytibzusakzsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003323.429502-1515-100436937641012/AnsiballZ_copy.py'
Oct 09 09:48:44 compute-1 sudo[138525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:48:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:44.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:48:44 compute-1 python3.9[138527]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003323.429502-1515-100436937641012/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:44 compute-1 sudo[138525]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:44 compute-1 sudo[138677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpmsnsopfaptxizuiwqpxkauwzkrvru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003324.428769-1560-24891204948967/AnsiballZ_command.py'
Oct 09 09:48:44 compute-1 sudo[138677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:44 compute-1 ceph-mon[9795]: pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:44 compute-1 python3.9[138679]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:48:44 compute-1 sudo[138677]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:48:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:45.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:48:45 compute-1 sudo[138830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuxhuyhzragvqwayhvljroudngtqyaoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003325.1173594-1584-262440375247723/AnsiballZ_lineinfile.py'
Oct 09 09:48:45 compute-1 sudo[138830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:45 compute-1 python3.9[138832]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:45 compute-1 sudo[138830]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:45 compute-1 sudo[138983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghihbszdsomwzkbyzstyvjhubixbthkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003325.6472516-1608-159196645882866/AnsiballZ_replace.py'
Oct 09 09:48:45 compute-1 sudo[138983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:46 compute-1 python3.9[138985]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:48:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:46.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:48:46 compute-1 sudo[138983]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:46 compute-1 sudo[139135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kquhbceuwxkkxwzepkshmitdpgslhpnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003326.3288288-1632-150832059820968/AnsiballZ_replace.py'
Oct 09 09:48:46 compute-1 sudo[139135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:46 compute-1 python3.9[139137]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:46 compute-1 sudo[139135]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:46 compute-1 ceph-mon[9795]: pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:47 compute-1 sudo[139287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccnnsppzmlrsewuzuhyxfypkrhfbpkng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003326.9183073-1659-251185200775897/AnsiballZ_lineinfile.py'
Oct 09 09:48:47 compute-1 sudo[139287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:47 compute-1 python3.9[139289]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:47 compute-1 sudo[139287]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:47.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:47 compute-1 sudo[139440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gudjjbzcdxtlupmbtmpsdkgeonqgvweu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003327.370655-1659-245637968264575/AnsiballZ_lineinfile.py'
Oct 09 09:48:47 compute-1 sudo[139440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:47 compute-1 python3.9[139442]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:47 compute-1 sudo[139440]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:47 compute-1 sudo[139592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjczccriqkzbycjkcwbjzfrgnbglttos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003327.818379-1659-94820889246803/AnsiballZ_lineinfile.py'
Oct 09 09:48:47 compute-1 sudo[139592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:48.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:48 compute-1 python3.9[139594]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:48 compute-1 sudo[139592]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:48 compute-1 sudo[139744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chadogwdrknhjbtxbzokirocsnvmpszr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003328.2612426-1659-249002929108897/AnsiballZ_lineinfile.py'
Oct 09 09:48:48 compute-1 sudo[139744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:48 compute-1 python3.9[139746]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:48 compute-1 sudo[139744]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:48 compute-1 ceph-mon[9795]: pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:49 compute-1 sudo[139896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baxkckstpwpytxxyfotalrludcjdqzhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003328.8793156-1746-146098969759293/AnsiballZ_stat.py'
Oct 09 09:48:49 compute-1 sudo[139896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:49 compute-1 python3.9[139898]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:49 compute-1 sudo[139896]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:49.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:49 compute-1 sudo[140051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqwbcprozltkebnoxbigdmobhwbvunrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003329.398346-1770-14419336692620/AnsiballZ_file.py'
Oct 09 09:48:49 compute-1 sudo[140051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:49 compute-1 python3.9[140053]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:49 compute-1 sudo[140051]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:48:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:50.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:50 compute-1 sudo[140153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:48:50 compute-1 sudo[140153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:50 compute-1 sudo[140153]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:50 compute-1 sudo[140228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbixugxmnkqmtwkkoauoekhtvzfkzvus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003330.0377467-1797-239130730586196/AnsiballZ_file.py'
Oct 09 09:48:50 compute-1 sudo[140228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:50 compute-1 python3.9[140230]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:50 compute-1 sudo[140228]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:50 compute-1 sudo[140380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whguyowfiixglymdlwrjrmnxbowrjubo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003330.5878308-1821-46885500502846/AnsiballZ_stat.py'
Oct 09 09:48:50 compute-1 sudo[140380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:50 compute-1 ceph-mon[9795]: pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:50 compute-1 python3.9[140382]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:50 compute-1 sudo[140380]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:51 compute-1 sudo[140458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dagmhecdfpconmhodbhqezegnzwkuyoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003330.5878308-1821-46885500502846/AnsiballZ_file.py'
Oct 09 09:48:51 compute-1 sudo[140458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:51 compute-1 python3.9[140460]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:51.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:51 compute-1 sudo[140458]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:51 compute-1 sudo[140611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqihqzfdingyfckpkynsztvrbykjsmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003331.3761024-1821-104773969144486/AnsiballZ_stat.py'
Oct 09 09:48:51 compute-1 sudo[140611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:51 compute-1 python3.9[140613]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:51 compute-1 sudo[140611]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:51 compute-1 sudo[140698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prieudmfnpkygshhrtygocwecvzialoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003331.3761024-1821-104773969144486/AnsiballZ_file.py'
Oct 09 09:48:51 compute-1 sudo[140698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:51 compute-1 podman[140663]: 2025-10-09 09:48:51.924260552 +0000 UTC m=+0.057962823 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:48:52 compute-1 python3.9[140707]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:52 compute-1 sudo[140698]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:52.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:52 compute-1 sudo[140864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yedlqknmtcrvmcbwqfqlwbogzyraqorl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003332.238347-1890-143688055057239/AnsiballZ_file.py'
Oct 09 09:48:52 compute-1 sudo[140864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:52 compute-1 python3.9[140866]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:52 compute-1 sudo[140864]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:52 compute-1 ceph-mon[9795]: pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:52 compute-1 sudo[141016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydyjexohpzkfqrmpqmilksaztdnnrxdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003332.779807-1914-91774902908002/AnsiballZ_stat.py'
Oct 09 09:48:52 compute-1 sudo[141016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:53 compute-1 python3.9[141018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:53 compute-1 sudo[141016]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:53.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:53 compute-1 sudo[141094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wobtzxpczbeziwualgwlsuckrannmpug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003332.779807-1914-91774902908002/AnsiballZ_file.py'
Oct 09 09:48:53 compute-1 sudo[141094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:53 compute-1 python3.9[141096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:53 compute-1 sudo[141094]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:53 compute-1 sudo[141247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvypjifwtetwnfnliaspprdisdthkeot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003333.6967132-1950-136565177455346/AnsiballZ_stat.py'
Oct 09 09:48:53 compute-1 sudo[141247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:54 compute-1 python3.9[141249]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:54 compute-1 sudo[141247]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:54.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:54 compute-1 sudo[141325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-witdsjfcwbugqbqkxcrkowxtvyjtumnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003333.6967132-1950-136565177455346/AnsiballZ_file.py'
Oct 09 09:48:54 compute-1 sudo[141325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:54 compute-1 python3.9[141327]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:54 compute-1 sudo[141325]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:54 compute-1 sudo[141477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxwzzzqmuzxsdfmvbfmubmpiyrxsshnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003334.5493846-1986-142720482954584/AnsiballZ_systemd.py'
Oct 09 09:48:54 compute-1 sudo[141477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:54 compute-1 ceph-mon[9795]: pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:54 compute-1 python3.9[141479]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:55 compute-1 systemd[1]: Reloading.
Oct 09 09:48:55 compute-1 systemd-sysv-generator[141503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:55 compute-1 systemd-rc-local-generator[141499]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:55.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:55 compute-1 sudo[141477]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:55 compute-1 sudo[141667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pccdpwhgmriampdzqxpfxyppeieegpjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003335.4570332-2010-215651335566376/AnsiballZ_stat.py'
Oct 09 09:48:55 compute-1 sudo[141667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:55 compute-1 python3.9[141669]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:55 compute-1 sudo[141667]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:55 compute-1 sudo[141745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olyzwjgizpwyrbszdfkildbyiikuvmio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003335.4570332-2010-215651335566376/AnsiballZ_file.py'
Oct 09 09:48:55 compute-1 sudo[141745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:56 compute-1 python3.9[141747]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:56.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:56 compute-1 sudo[141745]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:56 compute-1 sudo[141897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isgxodkyixoffaecqgswhpzzntfupfgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003336.322453-2046-44510573496242/AnsiballZ_stat.py'
Oct 09 09:48:56 compute-1 sudo[141897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:56 compute-1 python3.9[141899]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:56 compute-1 sudo[141897]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:56 compute-1 sudo[141975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llogaxivjqpjocctfunlqsvubxbwyayy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003336.322453-2046-44510573496242/AnsiballZ_file.py'
Oct 09 09:48:56 compute-1 sudo[141975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:56 compute-1 ceph-mon[9795]: pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:56 compute-1 python3.9[141977]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:57 compute-1 sudo[141975]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:48:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:57.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:48:57 compute-1 sudo[142128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihrwcirxrohjvebcmhlhygauvrinvyfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003337.200507-2082-59908736377225/AnsiballZ_systemd.py'
Oct 09 09:48:57 compute-1 sudo[142128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:57 compute-1 python3.9[142130]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:57 compute-1 systemd[1]: Reloading.
Oct 09 09:48:57 compute-1 systemd-rc-local-generator[142151]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:57 compute-1 systemd-sysv-generator[142154]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:57 compute-1 systemd[1]: Starting Create netns directory...
Oct 09 09:48:57 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:48:57 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:48:57 compute-1 systemd[1]: Finished Create netns directory.
Oct 09 09:48:57 compute-1 sudo[142128]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:58.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:58 compute-1 sudo[142320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otucsengudejufnqkoxogksvctqtsxbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003338.3433213-2112-147811702143158/AnsiballZ_file.py'
Oct 09 09:48:58 compute-1 sudo[142320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:58 compute-1 python3.9[142322]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:58 compute-1 sudo[142320]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:58 compute-1 ceph-mon[9795]: pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:59 compute-1 sudo[142472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwbnyfscmumbkenadipsdskhajidteaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003338.8790877-2136-68528448025150/AnsiballZ_stat.py'
Oct 09 09:48:59 compute-1 sudo[142472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:59 compute-1 python3.9[142474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:59 compute-1 sudo[142472]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:48:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:59.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:59 compute-1 sudo[142596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwoofapfjqozgkzjiamtlqfcuyizfber ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003338.8790877-2136-68528448025150/AnsiballZ_copy.py'
Oct 09 09:48:59 compute-1 sudo[142596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:59 compute-1 python3.9[142598]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003338.8790877-2136-68528448025150/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:59 compute-1 sudo[142596]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:00.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:00 compute-1 sudo[142748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwymhdbbkgiqllkvmrurpmbxobybmlfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003340.0562892-2187-29509844673117/AnsiballZ_file.py'
Oct 09 09:49:00 compute-1 sudo[142748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:00 compute-1 python3.9[142750]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:00 compute-1 sudo[142748]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:00 compute-1 sudo[142900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtyzpvwphkhypmdoquzxbowblfgepjty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003340.6062481-2211-41332818072869/AnsiballZ_stat.py'
Oct 09 09:49:00 compute-1 sudo[142900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:00 compute-1 ceph-mon[9795]: pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:00 compute-1 python3.9[142902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:49:00 compute-1 sudo[142900]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:01 compute-1 sudo[143023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzzvbbouylpcbrgkvuvxuqutnfnsrpfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003340.6062481-2211-41332818072869/AnsiballZ_copy.py'
Oct 09 09:49:01 compute-1 sudo[143023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:49:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:01.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:49:01 compute-1 python3.9[143025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003340.6062481-2211-41332818072869/.source.json _original_basename=.08ee7krp follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:01 compute-1 sudo[143023]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:01 compute-1 podman[143051]: 2025-10-09 09:49:01.575525104 +0000 UTC m=+0.080079012 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 09:49:01 compute-1 sudo[143193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neamusndslydlrilrkibsputdbeaoxoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003341.5812166-2256-148689695980514/AnsiballZ_file.py'
Oct 09 09:49:01 compute-1 sudo[143193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:01 compute-1 python3.9[143195]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:02 compute-1 sudo[143193]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:02.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:02 compute-1 sudo[143345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iobwxapynyfulbmblxxabuwwnhrszpbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003342.1837888-2280-96317682067921/AnsiballZ_stat.py'
Oct 09 09:49:02 compute-1 sudo[143345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:02 compute-1 sudo[143345]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:02 compute-1 sudo[143468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-higjwsbhzvzdebuamfltdjnptizqbbhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003342.1837888-2280-96317682067921/AnsiballZ_copy.py'
Oct 09 09:49:02 compute-1 sudo[143468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:02 compute-1 ceph-mon[9795]: pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:03 compute-1 sudo[143468]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:03.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:03 compute-1 sudo[143621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoubektperscccuizoovwmrlrltbrvgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003343.44194-2331-107547382347918/AnsiballZ_container_config_data.py'
Oct 09 09:49:03 compute-1 sudo[143621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:03 compute-1 python3.9[143623]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 09 09:49:03 compute-1 sudo[143621]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:49:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:04.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:49:04 compute-1 sudo[143773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dahlyohhlgofcigcbetotgahceipmqxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003344.04105-2358-233131322036456/AnsiballZ_container_config_hash.py'
Oct 09 09:49:04 compute-1 sudo[143773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:04 compute-1 python3.9[143775]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:49:04 compute-1 sudo[143773]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:04 compute-1 ceph-mon[9795]: pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:49:04 compute-1 sudo[143925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qquftufdzzidoueptajiatbyafcuvjac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003344.77654-2385-131888948886930/AnsiballZ_podman_container_info.py'
Oct 09 09:49:05 compute-1 sudo[143925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:05 compute-1 python3.9[143927]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 09:49:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:05.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:05 compute-1 sudo[143925]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:49:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:06.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:49:06 compute-1 sudo[144097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdvnhzhyxkjxqnpyefhfexkkvkidlbon ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003346.2626646-2424-7146774581531/AnsiballZ_edpm_container_manage.py'
Oct 09 09:49:06 compute-1 sudo[144097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:06 compute-1 python3[144099]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:49:06 compute-1 ceph-mon[9795]: pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:07.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:08.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:08 compute-1 podman[144110]: 2025-10-09 09:49:08.487763739 +0000 UTC m=+1.716617208 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 09 09:49:08 compute-1 podman[144157]: 2025-10-09 09:49:08.595381422 +0000 UTC m=+0.030609018 container create a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 09 09:49:08 compute-1 podman[144157]: 2025-10-09 09:49:08.580227232 +0000 UTC m=+0.015454848 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 09 09:49:08 compute-1 python3[144099]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 09 09:49:08 compute-1 sudo[144097]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:08 compute-1 ceph-mon[9795]: pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:09 compute-1 sudo[144335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbpratbmxflfuqdqzrzfxfvffbgescv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003348.8357537-2448-188709791598451/AnsiballZ_stat.py'
Oct 09 09:49:09 compute-1 sudo[144335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:09 compute-1 python3.9[144337]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:49:09 compute-1 sudo[144335]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:09.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:09 compute-1 sudo[144490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrylaoiafypsyqptvhsyetjdwnpiuuwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003349.4534783-2475-147019742764560/AnsiballZ_file.py'
Oct 09 09:49:09 compute-1 sudo[144490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:09 compute-1 python3.9[144492]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:09 compute-1 sudo[144490]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:09 compute-1 sudo[144566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjlhvwfmbbankjjyavyiwwygfcnntdbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003349.4534783-2475-147019742764560/AnsiballZ_stat.py'
Oct 09 09:49:09 compute-1 sudo[144566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:49:10.027 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:49:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:49:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:49:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:49:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:49:10 compute-1 python3.9[144568]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:49:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:10.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:10 compute-1 sudo[144566]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:10 compute-1 sudo[144569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:49:10 compute-1 sudo[144569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:10 compute-1 sudo[144569]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:10 compute-1 sudo[144742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxxpjkfhkbzptssulqgugnxntbhjytig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003350.2179031-2475-38899566157071/AnsiballZ_copy.py'
Oct 09 09:49:10 compute-1 sudo[144742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:10 compute-1 python3.9[144744]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003350.2179031-2475-38899566157071/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:10 compute-1 sudo[144742]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:10 compute-1 sudo[144818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqmpgwxqitdlyihmkjbepuvfkjjhodfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003350.2179031-2475-38899566157071/AnsiballZ_systemd.py'
Oct 09 09:49:10 compute-1 sudo[144818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:10 compute-1 ceph-mon[9795]: pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:11 compute-1 python3.9[144820]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:49:11 compute-1 systemd[1]: Reloading.
Oct 09 09:49:11 compute-1 systemd-rc-local-generator[144844]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:11 compute-1 systemd-sysv-generator[144849]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:11.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:11 compute-1 sudo[144818]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:11 compute-1 podman[144857]: 2025-10-09 09:49:11.4963958 +0000 UTC m=+0.070000621 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 09:49:11 compute-1 sudo[144947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohgcqnmzaaolsoeytzrqxcbxzzwiszfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003350.2179031-2475-38899566157071/AnsiballZ_systemd.py'
Oct 09 09:49:11 compute-1 sudo[144947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:11 compute-1 python3.9[144949]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:11 compute-1 systemd[1]: Reloading.
Oct 09 09:49:11 compute-1 systemd-rc-local-generator[144972]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:11 compute-1 systemd-sysv-generator[144979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:12.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:12 compute-1 systemd[1]: Starting multipathd container...
Oct 09 09:49:12 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:49:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 09:49:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:49:12 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a.
Oct 09 09:49:12 compute-1 podman[144989]: 2025-10-09 09:49:12.25967147 +0000 UTC m=+0.079296976 container init a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 09:49:12 compute-1 multipathd[145001]: + sudo -E kolla_set_configs
Oct 09 09:49:12 compute-1 podman[144989]: 2025-10-09 09:49:12.278474005 +0000 UTC m=+0.098099501 container start a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 09:49:12 compute-1 sudo[145007]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 09 09:49:12 compute-1 sudo[145007]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 09 09:49:12 compute-1 sudo[145007]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:49:12 compute-1 podman[144989]: multipathd
Oct 09 09:49:12 compute-1 systemd[1]: Started multipathd container.
Oct 09 09:49:12 compute-1 sudo[144947]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:12 compute-1 multipathd[145001]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:49:12 compute-1 multipathd[145001]: INFO:__main__:Validating config file
Oct 09 09:49:12 compute-1 multipathd[145001]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:49:12 compute-1 multipathd[145001]: INFO:__main__:Writing out command to execute
Oct 09 09:49:12 compute-1 sudo[145007]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:12 compute-1 multipathd[145001]: ++ cat /run_command
Oct 09 09:49:12 compute-1 multipathd[145001]: + CMD='/usr/sbin/multipathd -d'
Oct 09 09:49:12 compute-1 multipathd[145001]: + ARGS=
Oct 09 09:49:12 compute-1 multipathd[145001]: + sudo kolla_copy_cacerts
Oct 09 09:49:12 compute-1 podman[145008]: 2025-10-09 09:49:12.337200337 +0000 UTC m=+0.050580962 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 09 09:49:12 compute-1 sudo[145028]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 09 09:49:12 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-546f817f7ddc15e5.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 09:49:12 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-546f817f7ddc15e5.service: Failed with result 'exit-code'.
Oct 09 09:49:12 compute-1 sudo[145028]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 09 09:49:12 compute-1 sudo[145028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:49:12 compute-1 sudo[145028]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:12 compute-1 multipathd[145001]: + [[ ! -n '' ]]
Oct 09 09:49:12 compute-1 multipathd[145001]: + . kolla_extend_start
Oct 09 09:49:12 compute-1 multipathd[145001]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 09 09:49:12 compute-1 multipathd[145001]: Running command: '/usr/sbin/multipathd -d'
Oct 09 09:49:12 compute-1 multipathd[145001]: + umask 0022
Oct 09 09:49:12 compute-1 multipathd[145001]: + exec /usr/sbin/multipathd -d
Oct 09 09:49:12 compute-1 multipathd[145001]: 1035.969048 | --------start up--------
Oct 09 09:49:12 compute-1 multipathd[145001]: 1035.969204 | read /etc/multipath.conf
Oct 09 09:49:12 compute-1 multipathd[145001]: 1035.973615 | path checkers start up
Oct 09 09:49:12 compute-1 python3.9[145187]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:49:12 compute-1 ceph-mon[9795]: pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:13 compute-1 sudo[145339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjirgpyqfidzarixmlikcnwvyghcqnux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003352.958051-2583-60724417429615/AnsiballZ_command.py'
Oct 09 09:49:13 compute-1 sudo[145339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:49:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:13.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:49:13 compute-1 python3.9[145341]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:13 compute-1 sudo[145339]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:13 compute-1 sudo[145502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omrzsjdqttdhmmbfmqpowgkgytazkolg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003353.5206242-2607-177789874284164/AnsiballZ_systemd.py'
Oct 09 09:49:13 compute-1 sudo[145502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:13 compute-1 python3.9[145504]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:49:14 compute-1 systemd[1]: Stopping multipathd container...
Oct 09 09:49:14 compute-1 multipathd[145001]: 1037.649719 | exit (signal)
Oct 09 09:49:14 compute-1 multipathd[145001]: 1037.649750 | --------shut down-------
Oct 09 09:49:14 compute-1 systemd[1]: libpod-a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a.scope: Deactivated successfully.
Oct 09 09:49:14 compute-1 podman[145508]: 2025-10-09 09:49:14.074279721 +0000 UTC m=+0.058502051 container died a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:49:14 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-546f817f7ddc15e5.timer: Deactivated successfully.
Oct 09 09:49:14 compute-1 systemd[1]: Stopped /usr/bin/podman healthcheck run a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a.
Oct 09 09:49:14 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-userdata-shm.mount: Deactivated successfully.
Oct 09 09:49:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc-merged.mount: Deactivated successfully.
Oct 09 09:49:14 compute-1 podman[145508]: 2025-10-09 09:49:14.137622957 +0000 UTC m=+0.121845277 container cleanup a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:49:14 compute-1 podman[145508]: multipathd
Oct 09 09:49:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:14.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:14 compute-1 podman[145539]: multipathd
Oct 09 09:49:14 compute-1 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 09 09:49:14 compute-1 systemd[1]: Stopped multipathd container.
Oct 09 09:49:14 compute-1 systemd[1]: Starting multipathd container...
Oct 09 09:49:14 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:49:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 09:49:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:49:14 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a.
Oct 09 09:49:14 compute-1 podman[145548]: 2025-10-09 09:49:14.285632501 +0000 UTC m=+0.078798515 container init a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd)
Oct 09 09:49:14 compute-1 multipathd[145560]: + sudo -E kolla_set_configs
Oct 09 09:49:14 compute-1 sudo[145566]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 09 09:49:14 compute-1 podman[145548]: 2025-10-09 09:49:14.305354924 +0000 UTC m=+0.098520918 container start a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 09 09:49:14 compute-1 podman[145548]: multipathd
Oct 09 09:49:14 compute-1 sudo[145566]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 09 09:49:14 compute-1 sudo[145566]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:49:14 compute-1 systemd[1]: Started multipathd container.
Oct 09 09:49:14 compute-1 sudo[145502]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:14 compute-1 multipathd[145560]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:49:14 compute-1 multipathd[145560]: INFO:__main__:Validating config file
Oct 09 09:49:14 compute-1 multipathd[145560]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:49:14 compute-1 multipathd[145560]: INFO:__main__:Writing out command to execute
Oct 09 09:49:14 compute-1 sudo[145566]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:14 compute-1 multipathd[145560]: ++ cat /run_command
Oct 09 09:49:14 compute-1 multipathd[145560]: + CMD='/usr/sbin/multipathd -d'
Oct 09 09:49:14 compute-1 multipathd[145560]: + ARGS=
Oct 09 09:49:14 compute-1 multipathd[145560]: + sudo kolla_copy_cacerts
Oct 09 09:49:14 compute-1 podman[145567]: 2025-10-09 09:49:14.360995181 +0000 UTC m=+0.046706398 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 09 09:49:14 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-2767a13f34e46aee.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 09:49:14 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-2767a13f34e46aee.service: Failed with result 'exit-code'.
Oct 09 09:49:14 compute-1 sudo[145588]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 09 09:49:14 compute-1 sudo[145588]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 09 09:49:14 compute-1 sudo[145588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:49:14 compute-1 sudo[145588]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:14 compute-1 multipathd[145560]: Running command: '/usr/sbin/multipathd -d'
Oct 09 09:49:14 compute-1 multipathd[145560]: + [[ ! -n '' ]]
Oct 09 09:49:14 compute-1 multipathd[145560]: + . kolla_extend_start
Oct 09 09:49:14 compute-1 multipathd[145560]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 09 09:49:14 compute-1 multipathd[145560]: + umask 0022
Oct 09 09:49:14 compute-1 multipathd[145560]: + exec /usr/sbin/multipathd -d
Oct 09 09:49:14 compute-1 multipathd[145560]: 1038.000426 | --------start up--------
Oct 09 09:49:14 compute-1 multipathd[145560]: 1038.000440 | read /etc/multipath.conf
Oct 09 09:49:14 compute-1 multipathd[145560]: 1038.004688 | path checkers start up
Oct 09 09:49:14 compute-1 sudo[145746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwbciokzkiapoyhipdjdotixhroekqzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003354.5621507-2631-157565536671773/AnsiballZ_file.py'
Oct 09 09:49:14 compute-1 sudo[145746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:14 compute-1 python3.9[145748]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:14 compute-1 sudo[145746]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:14 compute-1 ceph-mon[9795]: pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:15.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:15 compute-1 sudo[145899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbllbphwfrqirjzmthjlmbmhvoldtpst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003355.4443548-2667-65502911918580/AnsiballZ_file.py'
Oct 09 09:49:15 compute-1 sudo[145899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:15 compute-1 python3.9[145901]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 09:49:15 compute-1 sudo[145899]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:49:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:16.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:49:16 compute-1 sudo[146051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vstbxattnmyzoolrjbgcfjknuucdjfmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003356.0248392-2691-210657192613134/AnsiballZ_modprobe.py'
Oct 09 09:49:16 compute-1 sudo[146051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:16 compute-1 python3.9[146053]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 09 09:49:16 compute-1 kernel: Key type psk registered
Oct 09 09:49:16 compute-1 sudo[146051]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:16 compute-1 sudo[146213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqrikwqyijbmwbieqzgvqnwscmaisvms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003356.6353297-2715-104742063536187/AnsiballZ_stat.py'
Oct 09 09:49:16 compute-1 sudo[146213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:17 compute-1 python3.9[146215]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:49:17 compute-1 ceph-mon[9795]: pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:17 compute-1 sudo[146213]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:17 compute-1 sudo[146336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypaexwntxvekbxzltrtqanvvgihvsbji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003356.6353297-2715-104742063536187/AnsiballZ_copy.py'
Oct 09 09:49:17 compute-1 sudo[146336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:17.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:17 compute-1 python3.9[146338]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003356.6353297-2715-104742063536187/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:17 compute-1 sudo[146336]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:17 compute-1 sudo[146489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfpnjrvrxhncahouiktjmcjjkrlcvvug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003357.7056148-2763-241424125834582/AnsiballZ_lineinfile.py'
Oct 09 09:49:17 compute-1 sudo[146489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:18 compute-1 python3.9[146491]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:18 compute-1 sudo[146489]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:18.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:18 compute-1 sudo[146641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfqlgxobfvwwgoxctiwlftstcmzfyxgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003358.2272666-2787-55700419012503/AnsiballZ_systemd.py'
Oct 09 09:49:18 compute-1 sudo[146641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:18 compute-1 python3.9[146643]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:49:18 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 09 09:49:18 compute-1 systemd[1]: Stopped Load Kernel Modules.
Oct 09 09:49:18 compute-1 systemd[1]: Stopping Load Kernel Modules...
Oct 09 09:49:18 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct 09 09:49:18 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 09 09:49:18 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct 09 09:49:18 compute-1 sudo[146641]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:19 compute-1 ceph-mon[9795]: pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:19 compute-1 sudo[146798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvneprzxfsoymznfobnijlueumpburhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003358.9679537-2811-75675550052966/AnsiballZ_setup.py'
Oct 09 09:49:19 compute-1 sudo[146798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:19.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:19 compute-1 python3.9[146800]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:49:19 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 09 09:49:19 compute-1 sudo[146798]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:49:20 compute-1 sudo[146884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntqxfhubojcrpcqsfndijpsumlrozkjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003358.9679537-2811-75675550052966/AnsiballZ_dnf.py'
Oct 09 09:49:20 compute-1 sudo[146884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:20.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:20 compute-1 python3.9[146886]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:49:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:21 compute-1 ceph-mon[9795]: pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:22.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:22 compute-1 podman[146889]: 2025-10-09 09:49:22.56333579 +0000 UTC m=+0.073954324 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 09 09:49:23 compute-1 ceph-mon[9795]: pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:49:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:23.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:49:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:24.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:25 compute-1 ceph-mon[9795]: pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:25.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:26.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:26 compute-1 systemd[1]: Reloading.
Oct 09 09:49:26 compute-1 systemd-sysv-generator[146941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:26 compute-1 systemd-rc-local-generator[146936]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:26 compute-1 systemd[1]: Reloading.
Oct 09 09:49:26 compute-1 systemd-sysv-generator[146977]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:26 compute-1 systemd-rc-local-generator[146973]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:26 compute-1 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 09 09:49:26 compute-1 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 09 09:49:26 compute-1 lvm[147021]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:49:26 compute-1 lvm[147021]: VG ceph_vg0 finished
Oct 09 09:49:26 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 09 09:49:27 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 09 09:49:27 compute-1 systemd[1]: Reloading.
Oct 09 09:49:27 compute-1 ceph-mon[9795]: pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:27 compute-1 systemd-rc-local-generator[147066]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:27 compute-1 systemd-sysv-generator[147069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:27 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 09 09:49:27 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 09 09:49:27 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 09 09:49:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:27.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:27 compute-1 sudo[146884]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:28 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 09 09:49:28 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 09 09:49:28 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.151s CPU time.
Oct 09 09:49:28 compute-1 systemd[1]: run-red74016144ef484ca6c5febbe69245e8.service: Deactivated successfully.
Oct 09 09:49:28 compute-1 sudo[148362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztctbpxglirkqxrgvupmsiiukgjafhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003367.832969-2847-105012769987215/AnsiballZ_file.py'
Oct 09 09:49:28 compute-1 sudo[148362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:28.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:28 compute-1 python3.9[148364]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:28 compute-1 sudo[148362]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:28 compute-1 python3.9[148514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:49:29 compute-1 ceph-mon[9795]: pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:29.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:29 compute-1 sudo[148669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixhvsyauwbcabwobxnxahbgkpiobxwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003369.2963574-2899-210900168574024/AnsiballZ_file.py'
Oct 09 09:49:29 compute-1 sudo[148669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:29 compute-1 python3.9[148671]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:29 compute-1 sudo[148669]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:30 compute-1 sudo[148748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:49:30 compute-1 sudo[148748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:30 compute-1 sudo[148748]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:30 compute-1 sudo[148846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjbzrrqzgwlumldtdijuqwpojwyoosxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003370.0880883-2932-79749311687551/AnsiballZ_systemd_service.py'
Oct 09 09:49:30 compute-1 sudo[148846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:30 compute-1 python3.9[148848]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:49:30 compute-1 systemd[1]: Reloading.
Oct 09 09:49:30 compute-1 systemd-rc-local-generator[148869]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:30 compute-1 systemd-sysv-generator[148874]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:31 compute-1 ceph-mon[9795]: pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:31 compute-1 sudo[148846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:31.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:31 compute-1 python3.9[149033]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:49:31 compute-1 network[149050]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:49:31 compute-1 network[149051]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:49:31 compute-1 network[149052]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:49:31 compute-1 podman[149057]: 2025-10-09 09:49:31.778209844 +0000 UTC m=+0.043544095 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 09 09:49:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:32.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:33 compute-1 sudo[149131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:49:33 compute-1 sudo[149131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:33 compute-1 sudo[149131]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:33 compute-1 ceph-mon[9795]: pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:33 compute-1 sudo[149159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:49:33 compute-1 sudo[149159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:33.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:33 compute-1 sudo[149159]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:49:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:34.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:35 compute-1 ceph-mon[9795]: pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:35 compute-1 ceph-mon[9795]: pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:49:35 compute-1 sudo[149425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amkekaenisfdialrqxpxoepwfenuplzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003375.1213334-2989-173856388031334/AnsiballZ_systemd_service.py'
Oct 09 09:49:35 compute-1 sudo[149425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:35.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:35 compute-1 python3.9[149427]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:35 compute-1 sudo[149425]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:35 compute-1 sudo[149579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgsnsfcecfoudhcnkkohetlsfcyhxuim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003375.690747-2989-25572495971184/AnsiballZ_systemd_service.py'
Oct 09 09:49:35 compute-1 sudo[149579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:36 compute-1 python3.9[149581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:36 compute-1 sudo[149579]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:36.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:36 compute-1 sudo[149732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcxwqhzwukyknpzjyymhbkhjijryjcfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003376.2549372-2989-59019123346158/AnsiballZ_systemd_service.py'
Oct 09 09:49:36 compute-1 sudo[149732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:36 compute-1 python3.9[149734]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:36 compute-1 sudo[149732]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:37 compute-1 sudo[149885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prqxfzqlxhtzlczuqkqnfxbglcgijgxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003376.828506-2989-142620647243864/AnsiballZ_systemd_service.py'
Oct 09 09:49:37 compute-1 sudo[149885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:37 compute-1 ceph-mon[9795]: pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:49:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:49:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:49:37 compute-1 sudo[149888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:49:37 compute-1 sudo[149888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:37 compute-1 sudo[149888]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:37 compute-1 python3.9[149887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:37 compute-1 sudo[149885]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:37.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:37 compute-1 sudo[150064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewaudmutvgoxoybxzqrhwwxbrjgxtgnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003377.5310142-2989-199496358435514/AnsiballZ_systemd_service.py'
Oct 09 09:49:37 compute-1 sudo[150064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:38 compute-1 python3.9[150066]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:38 compute-1 sudo[150064]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:38.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:38 compute-1 sudo[150217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmgsrogpbwgqdgbxirqbainpgimjqgfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003378.186847-2989-142431187080850/AnsiballZ_systemd_service.py'
Oct 09 09:49:38 compute-1 sudo[150217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:38 compute-1 python3.9[150219]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:38 compute-1 sudo[150217]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:38 compute-1 sudo[150370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zahbovskoxdizpoooxrqqvofodzduwue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003378.7594588-2989-95297669817698/AnsiballZ_systemd_service.py'
Oct 09 09:49:38 compute-1 sudo[150370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:39 compute-1 ceph-mon[9795]: pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:39 compute-1 python3.9[150372]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:39 compute-1 sudo[150370]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:39.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:39 compute-1 sudo[150524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffzsuqtwarnbqcszigfsmishwfzjhjps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003379.3616314-2989-9279388350477/AnsiballZ_systemd_service.py'
Oct 09 09:49:39 compute-1 sudo[150524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:39 compute-1 python3.9[150526]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:39 compute-1 sudo[150524]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:40.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:40 compute-1 sudo[150677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzxvskkgwwyfkkwnqwgpzewzkcgazdjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003380.5530825-3166-90367935359652/AnsiballZ_file.py'
Oct 09 09:49:40 compute-1 sudo[150677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:40 compute-1 python3.9[150679]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:40 compute-1 sudo[150677]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:41 compute-1 ceph-mon[9795]: pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:41 compute-1 sudo[150829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adimpehbhopghcwmpvwgxluleagtufei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003381.0036392-3166-207260430770643/AnsiballZ_file.py'
Oct 09 09:49:41 compute-1 sudo[150829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:41 compute-1 python3.9[150831]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:41 compute-1 sudo[150829]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:41 compute-1 sudo[150991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igtmiuxuretmjlwptpmtijrtbbcchuav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003381.4395034-3166-272347872903877/AnsiballZ_file.py'
Oct 09 09:49:41 compute-1 sudo[150991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:41 compute-1 podman[150956]: 2025-10-09 09:49:41.636512978 +0000 UTC m=+0.040702776 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 09:49:41 compute-1 python3.9[151001]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:41 compute-1 sudo[150991]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:42 compute-1 sudo[151151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbmfizejxdcgrqjwqpqmzzlkyfywvcdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003381.8873866-3166-32140615327663/AnsiballZ_file.py'
Oct 09 09:49:42 compute-1 sudo[151151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:42.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:42 compute-1 python3.9[151153]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:42 compute-1 sudo[151151]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:42 compute-1 sudo[151303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdklesziycggzydwhkjlmxksdywynked ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003382.3450584-3166-133844733131110/AnsiballZ_file.py'
Oct 09 09:49:42 compute-1 sudo[151303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:42 compute-1 python3.9[151305]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:42 compute-1 sudo[151303]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:42 compute-1 sudo[151455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okhzjlvtopyqfwecmrxutwrijzbdkzud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003382.7849305-3166-196205791494450/AnsiballZ_file.py'
Oct 09 09:49:42 compute-1 sudo[151455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:43 compute-1 python3.9[151457]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:43 compute-1 ceph-mon[9795]: pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:43 compute-1 sudo[151455]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:43.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:43 compute-1 sudo[151608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdmhgqezddumkfkwcabqkroxufgzhrhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003383.2251565-3166-182097379738053/AnsiballZ_file.py'
Oct 09 09:49:43 compute-1 sudo[151608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:43 compute-1 python3.9[151610]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:43 compute-1 sudo[151608]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:43 compute-1 sudo[151760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsbytjgxlyotlgzgawgsiwofravquohz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003383.6584053-3166-121911078112840/AnsiballZ_file.py'
Oct 09 09:49:43 compute-1 sudo[151760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:44 compute-1 python3.9[151762]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:44 compute-1 sudo[151760]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:44.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:44 compute-1 podman[151867]: 2025-10-09 09:49:44.527368317 +0000 UTC m=+0.042093970 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 09:49:44 compute-1 sudo[151930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyggqcffelxbzkieggcqcjxiqihfddkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003384.3631866-3337-73941270033080/AnsiballZ_file.py'
Oct 09 09:49:44 compute-1 sudo[151930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:44 compute-1 python3.9[151932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:44 compute-1 sudo[151930]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:44 compute-1 sudo[152082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwrrvxbgzsainfznsqraghvukdyqmuoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003384.7975097-3337-92201240949850/AnsiballZ_file.py'
Oct 09 09:49:44 compute-1 sudo[152082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:45 compute-1 python3.9[152084]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:45 compute-1 sudo[152082]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:45 compute-1 ceph-mon[9795]: pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:45.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:45 compute-1 sudo[152235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrfwqenhutxtsbuqaxvxuhwmormiigbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003385.2282248-3337-275312465764476/AnsiballZ_file.py'
Oct 09 09:49:45 compute-1 sudo[152235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:45 compute-1 python3.9[152237]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:45 compute-1 sudo[152235]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:45 compute-1 sudo[152387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-menxeisvmvgdwprvmiebrtysgkhihvxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003385.6775315-3337-138681623946624/AnsiballZ_file.py'
Oct 09 09:49:45 compute-1 sudo[152387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:46 compute-1 python3.9[152389]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:46 compute-1 sudo[152387]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:46.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:46 compute-1 sudo[152539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvdliekuamcfkvrltcyewvtrkcfcwarq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003386.1227002-3337-155925124177584/AnsiballZ_file.py'
Oct 09 09:49:46 compute-1 sudo[152539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:46 compute-1 python3.9[152541]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:46 compute-1 sudo[152539]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:46 compute-1 sudo[152691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdgebcbsydrbkklrwzqxiwawdexeivdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003386.5596323-3337-38488356152772/AnsiballZ_file.py'
Oct 09 09:49:46 compute-1 sudo[152691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:46 compute-1 python3.9[152693]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:46 compute-1 sudo[152691]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:47 compute-1 ceph-mon[9795]: pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:47 compute-1 sudo[152843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sselxunnrrhwrkoihlogidayniklbvbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003386.997389-3337-179726071281676/AnsiballZ_file.py'
Oct 09 09:49:47 compute-1 sudo[152843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:47.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:47 compute-1 python3.9[152845]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:47 compute-1 sudo[152843]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:47 compute-1 sudo[152996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebgufqtzecnjujeqynkvghtpcidvrnzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003387.4455812-3337-217766618371061/AnsiballZ_file.py'
Oct 09 09:49:47 compute-1 sudo[152996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:47 compute-1 python3.9[152998]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:47 compute-1 sudo[152996]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:48.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:48 compute-1 sudo[153148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqnnuspdbmmubhxgeiglokvwkocwmuig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003388.1562405-3512-141568952895392/AnsiballZ_command.py'
Oct 09 09:49:48 compute-1 sudo[153148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:48 compute-1 python3.9[153150]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:48 compute-1 sudo[153148]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:49 compute-1 ceph-mon[9795]: pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:49 compute-1 python3.9[153302]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:49:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:49.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:49 compute-1 sudo[153453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfotqfidmnmdhuqbnajqcvjwzafwzbor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003389.6897218-3565-10732353998156/AnsiballZ_systemd_service.py'
Oct 09 09:49:49 compute-1 sudo[153453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:50 compute-1 python3.9[153455]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:49:50 compute-1 systemd[1]: Reloading.
Oct 09 09:49:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:49:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:50.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:50 compute-1 systemd-rc-local-generator[153478]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:50 compute-1 systemd-sysv-generator[153482]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:50 compute-1 sudo[153490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:49:50 compute-1 sudo[153490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:50 compute-1 sudo[153490]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:50 compute-1 sudo[153453]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:50 compute-1 sudo[153664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjjjglynicgmuisnaivvwqumbipguqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003390.5555758-3589-180425625494729/AnsiballZ_command.py'
Oct 09 09:49:50 compute-1 sudo[153664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:50 compute-1 python3.9[153666]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:50 compute-1 sudo[153664]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:51 compute-1 ceph-mon[9795]: pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:51 compute-1 sudo[153817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wivricixdoeftysiiibjjznrndfmczji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003391.0498273-3589-269862188283348/AnsiballZ_command.py'
Oct 09 09:49:51 compute-1 sudo[153817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:51.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:51 compute-1 python3.9[153819]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:51 compute-1 sudo[153817]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:51 compute-1 sudo[153971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkkgdaqcecogmvtcoycsngmvxuoovnkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003391.494334-3589-244782672908683/AnsiballZ_command.py'
Oct 09 09:49:51 compute-1 sudo[153971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:51 compute-1 python3.9[153973]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:51 compute-1 sudo[153971]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:52 compute-1 sudo[154124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulorllitimhjxewwvxofuknjwifwustt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003391.9337082-3589-119569052099982/AnsiballZ_command.py'
Oct 09 09:49:52 compute-1 sudo[154124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:52.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:52 compute-1 python3.9[154126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:52 compute-1 sudo[154124]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:52 compute-1 sudo[154277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hajyeujdtsmctskudomxrhhdmitpzyzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003392.37755-3589-48863937613155/AnsiballZ_command.py'
Oct 09 09:49:52 compute-1 sudo[154277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:52 compute-1 python3.9[154279]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:52 compute-1 sudo[154277]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:52 compute-1 podman[154281]: 2025-10-09 09:49:52.801803654 +0000 UTC m=+0.059580056 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 09:49:53 compute-1 sudo[154453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsopxldbavpyvscbrnmomdqilnsawhbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003392.8530285-3589-80764455693712/AnsiballZ_command.py'
Oct 09 09:49:53 compute-1 sudo[154453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:53 compute-1 ceph-mon[9795]: pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:49:53 compute-1 python3.9[154455]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:53 compute-1 sudo[154453]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:53.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:53 compute-1 sudo[154607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siodftnfeolhcemwowkgesvnpntiwsou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003393.5253503-3589-210421076864944/AnsiballZ_command.py'
Oct 09 09:49:53 compute-1 sudo[154607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:53 compute-1 python3.9[154609]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:53 compute-1 sudo[154607]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:54 compute-1 sudo[154760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daanlulmfzhxdpzceozlhdxvblxrjnoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003394.020457-3589-226915017960717/AnsiballZ_command.py'
Oct 09 09:49:54 compute-1 sudo[154760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:54 compute-1 python3.9[154762]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:54 compute-1 sudo[154760]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:55 compute-1 ceph-mon[9795]: pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:49:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:55.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:49:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:55 compute-1 sudo[154914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioztzbqtrwxchtevnzzshvezewmguotm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003395.491006-3796-103020091331451/AnsiballZ_file.py'
Oct 09 09:49:55 compute-1 sudo[154914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:55 compute-1 python3.9[154916]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:55 compute-1 sudo[154914]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:56 compute-1 sudo[155066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzfkjeuroarmdgrexwgxopynqnkkldws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003395.9910219-3796-70501612103562/AnsiballZ_file.py'
Oct 09 09:49:56 compute-1 sudo[155066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:56.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:56 compute-1 python3.9[155068]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:56 compute-1 sudo[155066]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:56 compute-1 sudo[155218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vssgcedfehscivgvlozmermpzbzhxkmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003396.5120366-3796-98698546314156/AnsiballZ_file.py'
Oct 09 09:49:56 compute-1 sudo[155218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:56 compute-1 python3.9[155220]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:56 compute-1 sudo[155218]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:57 compute-1 ceph-mon[9795]: pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:49:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:57 compute-1 sudo[155371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnyhuczhemaxpsvvriaeidwlocodnoed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003397.1849754-3862-151211968243487/AnsiballZ_file.py'
Oct 09 09:49:57 compute-1 sudo[155371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:57 compute-1 python3.9[155373]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:57 compute-1 sudo[155371]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:57 compute-1 sudo[155523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxfinsbcilydlwiceknalxehgjhnvmcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003397.6593826-3862-23767203204349/AnsiballZ_file.py'
Oct 09 09:49:57 compute-1 sudo[155523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:57 compute-1 python3.9[155525]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:58 compute-1 sudo[155523]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:49:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:58.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:49:58 compute-1 ceph-mon[9795]: pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:58 compute-1 sudo[155675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wurjfyttjyzsgdlwuyjksujgovwauncj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003398.1231327-3862-211510371451817/AnsiballZ_file.py'
Oct 09 09:49:58 compute-1 sudo[155675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:58 compute-1 python3.9[155677]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:58 compute-1 sudo[155675]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:58 compute-1 sudo[155827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fypnhrjbxztiqzdhqbpetlrtoqjrltht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003398.5924168-3862-221676504811772/AnsiballZ_file.py'
Oct 09 09:49:58 compute-1 sudo[155827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:58 compute-1 python3.9[155829]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:58 compute-1 sudo[155827]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:59 compute-1 sudo[155979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foajmclbdjgssffwygcvucpcujygwdfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003399.0503802-3862-141785813244952/AnsiballZ_file.py'
Oct 09 09:49:59 compute-1 sudo[155979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:49:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:59.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:59 compute-1 python3.9[155981]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:59 compute-1 sudo[155979]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:59 compute-1 sudo[156132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gutalerbnoantqlxzhxkfkxjtemjsaud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003399.5001707-3862-86693763633964/AnsiballZ_file.py'
Oct 09 09:49:59 compute-1 sudo[156132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:59 compute-1 python3.9[156134]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:59 compute-1 sudo[156132]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:00 compute-1 systemd[1]: Starting system activity accounting tool...
Oct 09 09:50:00 compute-1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 09:50:00 compute-1 systemd[1]: Finished system activity accounting tool.
Oct 09 09:50:00 compute-1 sudo[156285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okxairpxzbyjhigmarydyvvcbgdfrenm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003399.9640615-3862-262838587940832/AnsiballZ_file.py'
Oct 09 09:50:00 compute-1 sudo[156285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:50:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:00.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:50:00 compute-1 python3.9[156287]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:00 compute-1 sudo[156285]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:00 compute-1 ceph-mon[9795]: pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:00 compute-1 ceph-mon[9795]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 09 09:50:00 compute-1 ceph-mon[9795]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 09 09:50:00 compute-1 ceph-mon[9795]:     daemon nfs.cephfs.0.0.compute-1.douegr on compute-1 is in error state
Oct 09 09:50:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:00 compute-1 sudo[156437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocisknnyidzdfepkflgvzqfitwnlzmtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003400.5893087-3862-50629077407597/AnsiballZ_file.py'
Oct 09 09:50:00 compute-1 sudo[156437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:00 compute-1 python3.9[156439]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:00 compute-1 sudo[156437]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:01 compute-1 sudo[156589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mowavjusxyhrcxeualauayzgtjchtxnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003401.0376804-3862-78268120651750/AnsiballZ_file.py'
Oct 09 09:50:01 compute-1 sudo[156589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:01.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:01 compute-1 python3.9[156591]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:01 compute-1 sudo[156589]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:02.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:02 compute-1 podman[156617]: 2025-10-09 09:50:02.528521808 +0000 UTC m=+0.039483285 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 09:50:02 compute-1 ceph-mon[9795]: pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:50:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:50:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:03.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:50:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:04.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:04 compute-1 ceph-mon[9795]: pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:50:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:06.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:06 compute-1 sudo[156761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzlbmkbyasazlneqwwhbihuntocumjai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003405.9227614-4229-269172751728010/AnsiballZ_getent.py'
Oct 09 09:50:06 compute-1 sudo[156761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:06 compute-1 python3.9[156763]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 09 09:50:06 compute-1 sudo[156761]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:06 compute-1 ceph-mon[9795]: pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:06 compute-1 sudo[156914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npklefevnhktjexeyihhlzilzfsuvlsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003406.59629-4253-80136895889154/AnsiballZ_group.py'
Oct 09 09:50:06 compute-1 sudo[156914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:07 compute-1 python3.9[156916]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 09 09:50:07 compute-1 groupadd[156917]: group added to /etc/group: name=nova, GID=42436
Oct 09 09:50:07 compute-1 groupadd[156917]: group added to /etc/gshadow: name=nova
Oct 09 09:50:07 compute-1 groupadd[156917]: new group: name=nova, GID=42436
Oct 09 09:50:07 compute-1 sudo[156914]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:07.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:07 compute-1 sudo[157073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkdjccudhmzyvmtcqegmopwchxpwpsrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003407.3069592-4277-48899098632661/AnsiballZ_user.py'
Oct 09 09:50:07 compute-1 sudo[157073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:07 compute-1 python3.9[157075]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 09 09:50:07 compute-1 useradd[157077]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 09 09:50:07 compute-1 useradd[157077]: add 'nova' to group 'libvirt'
Oct 09 09:50:07 compute-1 useradd[157077]: add 'nova' to shadow group 'libvirt'
Oct 09 09:50:07 compute-1 sudo[157073]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:08.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:08 compute-1 sshd-session[157108]: Accepted publickey for zuul from 192.168.122.30 port 45804 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:50:08 compute-1 systemd-logind[798]: New session 39 of user zuul.
Oct 09 09:50:08 compute-1 systemd[1]: Started Session 39 of User zuul.
Oct 09 09:50:08 compute-1 sshd-session[157108]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:50:08 compute-1 ceph-mon[9795]: pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:08 compute-1 sshd-session[157111]: Received disconnect from 192.168.122.30 port 45804:11: disconnected by user
Oct 09 09:50:08 compute-1 sshd-session[157111]: Disconnected from user zuul 192.168.122.30 port 45804
Oct 09 09:50:08 compute-1 sshd-session[157108]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:50:08 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Oct 09 09:50:08 compute-1 systemd-logind[798]: Session 39 logged out. Waiting for processes to exit.
Oct 09 09:50:08 compute-1 systemd-logind[798]: Removed session 39.
Oct 09 09:50:09 compute-1 python3.9[157261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:09 compute-1 python3.9[157383]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003408.9222152-4352-275798659640646/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:50:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:50:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:50:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:50:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:50:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:50:10 compute-1 python3.9[157533]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:50:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:10.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:50:10 compute-1 python3.9[157609]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:10 compute-1 sudo[157610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:50:10 compute-1 sudo[157610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:10 compute-1 sudo[157610]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:10 compute-1 ceph-mon[9795]: pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:10 compute-1 python3.9[157784]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:11 compute-1 python3.9[157905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003410.5647507-4352-264178552898551/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:11.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:11 compute-1 python3.9[158056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:11 compute-1 podman[158151]: 2025-10-09 09:50:11.972290869 +0000 UTC m=+0.041069757 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 09 09:50:12 compute-1 python3.9[158187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003411.3843381-4352-100882208068077/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:12.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:12 compute-1 python3.9[158343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:12 compute-1 ceph-mon[9795]: pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:12 compute-1 python3.9[158464]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003412.2139869-4352-159821116365442/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:13.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:13 compute-1 sudo[158615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbxkddehweiszzticzluoqiwtkwxgexm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003413.2102132-4559-92391803385477/AnsiballZ_file.py'
Oct 09 09:50:13 compute-1 sudo[158615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:13 compute-1 python3.9[158617]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:50:13 compute-1 sudo[158615]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:13 compute-1 sudo[158767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxsxmwmeayqaaksfiyqxkeijqcrbdgle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003413.7558937-4583-14034682885834/AnsiballZ_copy.py'
Oct 09 09:50:13 compute-1 sudo[158767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:14 compute-1 python3.9[158769]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:50:14 compute-1 sudo[158767]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:14.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:14 compute-1 sudo[158919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oghvgcyzyhuqwymavubvpvxedyshiwrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003414.282813-4607-82540879427534/AnsiballZ_stat.py'
Oct 09 09:50:14 compute-1 sudo[158919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:14 compute-1 python3.9[158921]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:14 compute-1 sudo[158919]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:14 compute-1 ceph-mon[9795]: pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:15 compute-1 sudo[159084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arzxqrccjmqbninidfckhojeahmctxcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003414.8123224-4631-181590184628926/AnsiballZ_stat.py'
Oct 09 09:50:15 compute-1 sudo[159084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:15 compute-1 podman[159045]: 2025-10-09 09:50:15.0340596 +0000 UTC m=+0.041531589 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:50:15 compute-1 python3.9[159091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:15 compute-1 sudo[159084]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:15.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:15 compute-1 sudo[159213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncgzhqdlirbbgalmzjyumtrayltmhwsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003414.8123224-4631-181590184628926/AnsiballZ_copy.py'
Oct 09 09:50:15 compute-1 sudo[159213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:15 compute-1 python3.9[159215]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760003414.8123224-4631-181590184628926/.source _original_basename=.zbeun2z3 follow=False checksum=eed7f96a5dd772c84aeba4a6fa2dfdaaf1ba521a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 09 09:50:15 compute-1 sudo[159213]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:16 compute-1 python3.9[159367]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:16 compute-1 ceph-mon[9795]: pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:16 compute-1 python3.9[159519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:17 compute-1 python3.9[159640]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003416.4754796-4709-16999923425064/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:17 compute-1 python3.9[159791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:18 compute-1 python3.9[159912]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003417.3753889-4754-264564526875956/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:18 compute-1 sudo[160062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quxeiebeqosiamghdgzrpzfjdusghfwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003418.4838028-4805-18266324275153/AnsiballZ_container_config_data.py'
Oct 09 09:50:18 compute-1 sudo[160062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:18 compute-1 ceph-mon[9795]: pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:18 compute-1 python3.9[160064]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 09 09:50:18 compute-1 sudo[160062]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:19 compute-1 sudo[160214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgpjqnjuyjhpuntqjrhlihmmayhxhrcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003419.086662-4832-89611760665797/AnsiballZ_container_config_hash.py'
Oct 09 09:50:19 compute-1 sudo[160214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:50:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:19.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:50:19 compute-1 python3.9[160216]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:50:19 compute-1 sudo[160214]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:50:19 compute-1 sudo[160367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owoeogejwobqjnkooggwgceaxxjipexg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003419.8133202-4862-82758392100729/AnsiballZ_edpm_container_manage.py'
Oct 09 09:50:19 compute-1 sudo[160367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:20 compute-1 python3[160369]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:50:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:50:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:20.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:50:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:20 compute-1 ceph-mon[9795]: pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:21.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:50:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:22.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:50:22 compute-1 ceph-mon[9795]: pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:23.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:23 compute-1 podman[160403]: 2025-10-09 09:50:23.556549842 +0000 UTC m=+0.065186633 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 09 09:50:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:24.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:24 compute-1 ceph-mon[9795]: pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:26.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:26 compute-1 ceph-mon[9795]: pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:27.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:28.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:28 compute-1 ceph-mon[9795]: pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:50:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:29.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:50:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:30.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:30 compute-1 sudo[160455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:50:30 compute-1 sudo[160455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:30 compute-1 sudo[160455]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:30 compute-1 ceph-mon[9795]: pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:50:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:31.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:50:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:32 compute-1 ceph-mon[9795]: pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:33 compute-1 podman[160380]: 2025-10-09 09:50:33.061482611 +0000 UTC m=+12.814959599 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 09 09:50:33 compute-1 podman[160499]: 2025-10-09 09:50:33.159754373 +0000 UTC m=+0.031490389 container create 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:50:33 compute-1 podman[160499]: 2025-10-09 09:50:33.14484948 +0000 UTC m=+0.016585515 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 09 09:50:33 compute-1 python3[160369]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 09 09:50:33 compute-1 sudo[160367]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:33.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:33 compute-1 podman[160604]: 2025-10-09 09:50:33.538480169 +0000 UTC m=+0.048617774 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:50:33 compute-1 sudo[160694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqacdfcgwsbkezrhublawlhuhtxvhppj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003433.4116302-4886-18493076254470/AnsiballZ_stat.py'
Oct 09 09:50:33 compute-1 sudo[160694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:33 compute-1 python3.9[160696]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:33 compute-1 sudo[160694]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:34 compute-1 sudo[160848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbanuzutlbdkytsblanycwkzchnrwwqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003434.3306222-4922-57821854147483/AnsiballZ_container_config_data.py'
Oct 09 09:50:34 compute-1 sudo[160848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:34 compute-1 python3.9[160850]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 09 09:50:34 compute-1 sudo[160848]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:34 compute-1 ceph-mon[9795]: pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:50:35 compute-1 sudo[161000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfzqxizemlecguiisydzxuvczihtaiyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003434.923747-4949-44687678006156/AnsiballZ_container_config_hash.py'
Oct 09 09:50:35 compute-1 sudo[161000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:35 compute-1 python3.9[161002]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:50:35 compute-1 sudo[161000]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:35.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:35 compute-1 sudo[161153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhdunklfgbqnmbehvfzjsabifetcercd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003435.653401-4979-123930173637353/AnsiballZ_edpm_container_manage.py'
Oct 09 09:50:35 compute-1 sudo[161153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:36 compute-1 python3[161155]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:50:36 compute-1 podman[161183]: 2025-10-09 09:50:36.204754487 +0000 UTC m=+0.029323420 container create a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:50:36 compute-1 podman[161183]: 2025-10-09 09:50:36.191119699 +0000 UTC m=+0.015688642 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 09 09:50:36 compute-1 python3[161155]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct 09 09:50:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:36 compute-1 sudo[161153]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:36 compute-1 sudo[161360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itqkrygnzzlvdxtqbriyegvqxbhezjkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003436.4512577-5003-175477856796497/AnsiballZ_stat.py'
Oct 09 09:50:36 compute-1 sudo[161360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:36 compute-1 python3.9[161362]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:36 compute-1 sudo[161360]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:36 compute-1 ceph-mon[9795]: pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:37 compute-1 sudo[161472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:50:37 compute-1 sudo[161472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:37 compute-1 sudo[161472]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:37 compute-1 sudo[161552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqtwpalgjrhjdnvjcgunwhakeyreupqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003437.11955-5030-259987642721295/AnsiballZ_file.py'
Oct 09 09:50:37 compute-1 sudo[161552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:37 compute-1 sudo[161524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:50:37 compute-1 sudo[161524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:37.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:37 compute-1 python3.9[161564]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:50:37 compute-1 sudo[161552]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:37 compute-1 sudo[161524]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:37 compute-1 sudo[161745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zixrrsfingymaxsdmepgdlmeibiilfex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003437.5312989-5030-249679790973499/AnsiballZ_copy.py'
Oct 09 09:50:37 compute-1 sudo[161745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:50:37 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct 09 09:50:37 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:37.997624) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:50:37 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct 09 09:50:37 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003437997707, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4659, "num_deletes": 502, "total_data_size": 12778659, "memory_usage": 12956080, "flush_reason": "Manual Compaction"}
Oct 09 09:50:37 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438012047, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8291926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13169, "largest_seqno": 17823, "table_properties": {"data_size": 8274219, "index_size": 11961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36680, "raw_average_key_size": 19, "raw_value_size": 8237464, "raw_average_value_size": 4428, "num_data_blocks": 522, "num_entries": 1860, "num_filter_entries": 1860, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002995, "oldest_key_time": 1760002995, "file_creation_time": 1760003437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 14448 microseconds, and 10438 cpu microseconds.
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.012081) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8291926 bytes OK
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.012093) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.013608) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.013619) EVENT_LOG_v1 {"time_micros": 1760003438013616, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.013631) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12758120, prev total WAL file size 12758120, number of live WAL files 2.
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.015264) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8097KB)], [27(11MB)]
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438015287, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 19850120, "oldest_snapshot_seqno": -1}
Oct 09 09:50:38 compute-1 python3.9[161747]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003437.5312989-5030-249679790973499/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:50:38 compute-1 sudo[161745]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4995 keys, 15244332 bytes, temperature: kUnknown
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438058440, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15244332, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15206185, "index_size": 24533, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 124780, "raw_average_key_size": 24, "raw_value_size": 15110761, "raw_average_value_size": 3025, "num_data_blocks": 1034, "num_entries": 4995, "num_filter_entries": 4995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.058600) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15244332 bytes
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.060266) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 459.5 rd, 352.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.0 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(4.2) write-amplify(1.8) OK, records in: 6018, records dropped: 1023 output_compression: NoCompression
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.060279) EVENT_LOG_v1 {"time_micros": 1760003438060273, "job": 14, "event": "compaction_finished", "compaction_time_micros": 43204, "compaction_time_cpu_micros": 21214, "output_level": 6, "num_output_files": 1, "total_output_size": 15244332, "num_input_records": 6018, "num_output_records": 4995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438061478, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438063033, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.015218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-1 sudo[161821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfpcmpsiitafmmqbbfbrstxzxconfdeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003437.5312989-5030-249679790973499/AnsiballZ_systemd.py'
Oct 09 09:50:38 compute-1 sudo[161821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:38 compute-1 python3.9[161823]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:50:38 compute-1 systemd[1]: Reloading.
Oct 09 09:50:38 compute-1 systemd-sysv-generator[161847]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:50:38 compute-1 systemd-rc-local-generator[161844]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:50:38 compute-1 sudo[161821]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:38 compute-1 ceph-mon[9795]: pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:38 compute-1 ceph-mon[9795]: pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:39 compute-1 sudo[161932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azjqcknzjgriidtpurpgcvqwouinuiez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003437.5312989-5030-249679790973499/AnsiballZ_systemd.py'
Oct 09 09:50:39 compute-1 sudo[161932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:39 compute-1 python3.9[161934]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:50:39 compute-1 systemd[1]: Reloading.
Oct 09 09:50:39 compute-1 systemd-rc-local-generator[161962]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:50:39 compute-1 systemd-sysv-generator[161965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:50:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:39.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:39 compute-1 systemd[1]: Starting nova_compute container...
Oct 09 09:50:39 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-1 podman[161975]: 2025-10-09 09:50:39.652987642 +0000 UTC m=+0.066963588 container init a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 09 09:50:39 compute-1 podman[161975]: 2025-10-09 09:50:39.658515149 +0000 UTC m=+0.072491085 container start a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 09 09:50:39 compute-1 podman[161975]: nova_compute
Oct 09 09:50:39 compute-1 nova_compute[161987]: + sudo -E kolla_set_configs
Oct 09 09:50:39 compute-1 systemd[1]: Started nova_compute container.
Oct 09 09:50:39 compute-1 sudo[161932]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Validating config file
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying service configuration files
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Deleting /etc/ceph
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Creating directory /etc/ceph
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Writing out command to execute
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 09:50:39 compute-1 nova_compute[161987]: ++ cat /run_command
Oct 09 09:50:39 compute-1 nova_compute[161987]: + CMD=nova-compute
Oct 09 09:50:39 compute-1 nova_compute[161987]: + ARGS=
Oct 09 09:50:39 compute-1 nova_compute[161987]: + sudo kolla_copy_cacerts
Oct 09 09:50:39 compute-1 nova_compute[161987]: + [[ ! -n '' ]]
Oct 09 09:50:39 compute-1 nova_compute[161987]: + . kolla_extend_start
Oct 09 09:50:39 compute-1 nova_compute[161987]: Running command: 'nova-compute'
Oct 09 09:50:39 compute-1 nova_compute[161987]: + echo 'Running command: '\''nova-compute'\'''
Oct 09 09:50:39 compute-1 nova_compute[161987]: + umask 0022
Oct 09 09:50:39 compute-1 nova_compute[161987]: + exec nova-compute
Oct 09 09:50:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:40 compute-1 python3.9[162149]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:41 compute-1 ceph-mon[9795]: pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.382 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.382 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.382 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.382 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 09 09:50:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:50:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:41.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:50:41 compute-1 python3.9[162299]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.491 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.501 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:41 compute-1 sudo[162329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:50:41 compute-1 sudo[162329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:41 compute-1 sudo[162329]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.035 2 INFO nova.virt.driver [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.117 2 INFO nova.compute.provider_config [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.128 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 python3.9[162479]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 WARNING oslo_config.cfg [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 09 09:50:42 compute-1 nova_compute[161987]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 09 09:50:42 compute-1 nova_compute[161987]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 09 09:50:42 compute-1 nova_compute[161987]: and ``live_migration_inbound_addr`` respectively.
Oct 09 09:50:42 compute-1 nova_compute[161987]: ).  Its value may be silently ignored in the future.
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:42.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.259 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.272 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.273 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.273 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.273 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 09 09:50:42 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Oct 09 09:50:42 compute-1 systemd[1]: Started libvirt QEMU daemon.
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.335 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fc7f6eb76d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.337 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fc7f6eb76d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.338 2 INFO nova.virt.libvirt.driver [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Connection event '1' reason 'None'
Oct 09 09:50:42 compute-1 podman[162525]: 2025-10-09 09:50:42.347901481 +0000 UTC m=+0.041242262 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.349 2 WARNING nova.virt.libvirt.driver [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct 09 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.349 2 DEBUG nova.virt.libvirt.volume.mount [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 09 09:50:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:50:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:50:42 compute-1 ceph-mon[9795]: pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:42 compute-1 sudo[162698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnwwobipvplceinekwcdwdhznuwjzmbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003442.4656723-5210-262380341804390/AnsiballZ_podman_container.py'
Oct 09 09:50:42 compute-1 sudo[162698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:42 compute-1 python3.9[162700]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 09 09:50:42 compute-1 sudo[162698]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.029 2 INFO nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host capabilities <capabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]: 
Oct 09 09:50:43 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <host>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <uuid>99ca1aa4-a8fe-49f8-8019-77dd20980206</uuid>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <arch>x86_64</arch>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <microcode version='167776725'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <signature family='25' model='1' stepping='1'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <maxphysaddr mode='emulate' bits='48'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='x2apic'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='tsc-deadline'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='osxsave'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='hypervisor'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='tsc_adjust'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='ospke'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='vaes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='vpclmulqdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='spec-ctrl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='stibp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='arch-capabilities'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='cmp_legacy'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='virt-ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='lbrv'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='tsc-scale'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='vmcb-clean'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='pause-filter'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='pfthreshold'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='vgif'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='rdctl-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='mds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature name='pschange-mc-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <pages unit='KiB' size='4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <pages unit='KiB' size='2048'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <pages unit='KiB' size='1048576'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <power_management>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <suspend_mem/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </power_management>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <iommu support='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <migration_features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <live/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <uri_transports>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <uri_transport>tcp</uri_transport>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <uri_transport>rdma</uri_transport>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </uri_transports>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </migration_features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <topology>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <cells num='1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <cell id='0'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:           <memory unit='KiB'>7865152</memory>
Oct 09 09:50:43 compute-1 nova_compute[161987]:           <pages unit='KiB' size='4'>1966288</pages>
Oct 09 09:50:43 compute-1 nova_compute[161987]:           <pages unit='KiB' size='2048'>0</pages>
Oct 09 09:50:43 compute-1 nova_compute[161987]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 09 09:50:43 compute-1 nova_compute[161987]:           <distances>
Oct 09 09:50:43 compute-1 nova_compute[161987]:             <sibling id='0' value='10'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:           </distances>
Oct 09 09:50:43 compute-1 nova_compute[161987]:           <cpus num='4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:           </cpus>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         </cell>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </cells>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </topology>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <cache>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </cache>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <secmodel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model>selinux</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <doi>0</doi>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </secmodel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <secmodel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model>dac</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <doi>0</doi>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </secmodel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </host>
Oct 09 09:50:43 compute-1 nova_compute[161987]: 
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <guest>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <os_type>hvm</os_type>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <arch name='i686'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <wordsize>32</wordsize>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <domain type='qemu'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <domain type='kvm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </arch>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <pae/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <nonpae/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <acpi default='on' toggle='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <apic default='on' toggle='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <cpuselection/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <deviceboot/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <disksnapshot default='on' toggle='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <externalSnapshot/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </guest>
Oct 09 09:50:43 compute-1 nova_compute[161987]: 
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <guest>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <os_type>hvm</os_type>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <arch name='x86_64'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <wordsize>64</wordsize>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <domain type='qemu'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <domain type='kvm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </arch>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <acpi default='on' toggle='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <apic default='on' toggle='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <cpuselection/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <deviceboot/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <disksnapshot default='on' toggle='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <externalSnapshot/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </guest>
Oct 09 09:50:43 compute-1 nova_compute[161987]: 
Oct 09 09:50:43 compute-1 nova_compute[161987]: </capabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]: 
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.034 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.050 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 09 09:50:43 compute-1 nova_compute[161987]: <domainCapabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <domain>kvm</domain>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <arch>i686</arch>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <vcpu max='4096'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <iothreads supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <os supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <enum name='firmware'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <loader supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>rom</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pflash</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='readonly'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>yes</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>no</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='secure'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>no</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </loader>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </os>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>on</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>off</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='maximumMigratable'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>on</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>off</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='succor'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='custom' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Denverton'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Denverton-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-128'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-256'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-512'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='KnightsMill'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SierraForest'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='athlon'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='athlon-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='core2duo'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='core2duo-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='coreduo'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='coreduo-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='n270'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='n270-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='phenom'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='phenom-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <memoryBacking supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <enum name='sourceType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>file</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>anonymous</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>memfd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </memoryBacking>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <devices>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <disk supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='diskDevice'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>disk</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>cdrom</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>floppy</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>lun</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='bus'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>fdc</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>scsi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>sata</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </disk>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <graphics supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vnc</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>egl-headless</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>dbus</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </graphics>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <video supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='modelType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vga</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>cirrus</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>none</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>bochs</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ramfb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </video>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <hostdev supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='mode'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>subsystem</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='startupPolicy'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>default</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>mandatory</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>requisite</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>optional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='subsysType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pci</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>scsi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='capsType'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='pciBackend'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </hostdev>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <rng supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>random</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>egd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>builtin</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </rng>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <filesystem supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='driverType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>path</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>handle</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtiofs</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </filesystem>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <tpm supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tpm-tis</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tpm-crb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>emulator</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>external</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendVersion'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>2.0</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </tpm>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <redirdev supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='bus'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </redirdev>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <channel supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pty</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>unix</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </channel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <crypto supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>qemu</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>builtin</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </crypto>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <interface supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>default</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>passt</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </interface>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <panic supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>isa</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>hyperv</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </panic>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </devices>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <gic supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <genid supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <backup supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <async-teardown supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <ps2 supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <sev supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <sgx supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <hyperv supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='features'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>relaxed</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vapic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>spinlocks</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vpindex</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>runtime</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>synic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>stimer</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>reset</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vendor_id</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>frequencies</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>reenlightenment</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tlbflush</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ipi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>avic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>emsr_bitmap</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>xmm_input</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </hyperv>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <launchSecurity supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </features>
Oct 09 09:50:43 compute-1 nova_compute[161987]: </domainCapabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.053 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 09 09:50:43 compute-1 nova_compute[161987]: <domainCapabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <domain>kvm</domain>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <arch>i686</arch>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <vcpu max='240'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <iothreads supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <os supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <enum name='firmware'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <loader supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>rom</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pflash</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='readonly'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>yes</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>no</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='secure'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>no</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </loader>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </os>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>on</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>off</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='maximumMigratable'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>on</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>off</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='succor'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='custom' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Denverton'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Denverton-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-128'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-256'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-512'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='KnightsMill'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SierraForest'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='athlon'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='athlon-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='core2duo'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='core2duo-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='coreduo'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='coreduo-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='n270'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='n270-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='phenom'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='phenom-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <memoryBacking supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <enum name='sourceType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>file</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>anonymous</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>memfd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </memoryBacking>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <devices>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <disk supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='diskDevice'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>disk</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>cdrom</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>floppy</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>lun</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='bus'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ide</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>fdc</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>scsi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>sata</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </disk>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <graphics supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vnc</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>egl-headless</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>dbus</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </graphics>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <video supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='modelType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vga</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>cirrus</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>none</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>bochs</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ramfb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </video>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <hostdev supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='mode'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>subsystem</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='startupPolicy'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>default</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>mandatory</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>requisite</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>optional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='subsysType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pci</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>scsi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='capsType'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='pciBackend'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </hostdev>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <rng supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>random</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>egd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>builtin</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </rng>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <filesystem supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='driverType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>path</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>handle</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtiofs</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </filesystem>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <tpm supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tpm-tis</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tpm-crb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>emulator</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>external</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendVersion'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>2.0</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </tpm>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <redirdev supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='bus'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </redirdev>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <channel supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pty</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>unix</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </channel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <crypto supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>qemu</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>builtin</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </crypto>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <interface supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>default</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>passt</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </interface>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <panic supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>isa</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>hyperv</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </panic>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </devices>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <gic supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <genid supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <backup supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <async-teardown supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <ps2 supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <sev supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <sgx supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <hyperv supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='features'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>relaxed</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vapic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>spinlocks</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vpindex</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>runtime</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>synic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>stimer</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>reset</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vendor_id</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>frequencies</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>reenlightenment</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tlbflush</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ipi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>avic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>emsr_bitmap</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>xmm_input</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </hyperv>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <launchSecurity supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </features>
Oct 09 09:50:43 compute-1 nova_compute[161987]: </domainCapabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.078 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.080 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 09 09:50:43 compute-1 nova_compute[161987]: <domainCapabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <domain>kvm</domain>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <arch>x86_64</arch>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <vcpu max='4096'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <iothreads supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <os supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <enum name='firmware'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>efi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <loader supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>rom</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pflash</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='readonly'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>yes</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>no</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='secure'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>yes</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>no</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </loader>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </os>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>on</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>off</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='maximumMigratable'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>on</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>off</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='succor'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='custom' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Denverton'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Denverton-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-128'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-256'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-512'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='KnightsMill'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SierraForest'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='athlon'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='athlon-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='core2duo'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='core2duo-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='coreduo'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='coreduo-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='n270'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='n270-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='phenom'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='phenom-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <memoryBacking supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <enum name='sourceType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>file</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>anonymous</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>memfd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </memoryBacking>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <devices>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <disk supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='diskDevice'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>disk</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>cdrom</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>floppy</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>lun</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='bus'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>fdc</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>scsi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>sata</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </disk>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <graphics supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vnc</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>egl-headless</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>dbus</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </graphics>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <video supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='modelType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vga</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>cirrus</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>none</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>bochs</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ramfb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </video>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <hostdev supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='mode'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>subsystem</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='startupPolicy'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>default</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>mandatory</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>requisite</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>optional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='subsysType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pci</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>scsi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='capsType'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='pciBackend'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </hostdev>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <rng supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>random</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>egd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>builtin</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </rng>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <filesystem supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='driverType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>path</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>handle</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtiofs</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </filesystem>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <tpm supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tpm-tis</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tpm-crb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>emulator</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>external</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendVersion'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>2.0</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </tpm>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <redirdev supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='bus'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </redirdev>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <channel supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pty</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>unix</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </channel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <crypto supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>qemu</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>builtin</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </crypto>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <interface supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>default</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>passt</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </interface>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <panic supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>isa</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>hyperv</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </panic>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </devices>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <gic supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <genid supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <backup supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <async-teardown supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <ps2 supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <sev supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <sgx supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <hyperv supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='features'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>relaxed</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vapic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>spinlocks</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vpindex</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>runtime</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>synic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>stimer</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>reset</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vendor_id</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>frequencies</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>reenlightenment</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tlbflush</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ipi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>avic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>emsr_bitmap</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>xmm_input</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </hyperv>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <launchSecurity supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </features>
Oct 09 09:50:43 compute-1 nova_compute[161987]: </domainCapabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.128 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 09 09:50:43 compute-1 nova_compute[161987]: <domainCapabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <domain>kvm</domain>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <arch>x86_64</arch>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <vcpu max='240'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <iothreads supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <os supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <enum name='firmware'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <loader supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>rom</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pflash</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='readonly'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>yes</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>no</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='secure'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>no</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </loader>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </os>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>on</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>off</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='maximumMigratable'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>on</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>off</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='succor'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <mode name='custom' supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Denverton'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Denverton-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-128'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-256'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx10-512'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Haswell-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='KnightsMill'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xop'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='la57'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SierraForest'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='hle'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='athlon'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='athlon-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='core2duo'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='core2duo-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='coreduo'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='coreduo-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='n270'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='n270-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='ss'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='phenom'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <blockers model='phenom-v1'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </blockers>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </mode>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </cpu>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <memoryBacking supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <enum name='sourceType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>file</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>anonymous</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <value>memfd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </memoryBacking>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <devices>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <disk supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='diskDevice'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>disk</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>cdrom</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>floppy</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>lun</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='bus'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ide</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>fdc</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>scsi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>sata</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </disk>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <graphics supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vnc</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>egl-headless</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>dbus</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </graphics>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <video supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='modelType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vga</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>cirrus</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>none</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>bochs</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ramfb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </video>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <hostdev supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='mode'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>subsystem</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='startupPolicy'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>default</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>mandatory</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>requisite</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>optional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='subsysType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pci</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>scsi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='capsType'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='pciBackend'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </hostdev>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <rng supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>random</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>egd</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>builtin</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </rng>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <filesystem supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='driverType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>path</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>handle</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>virtiofs</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </filesystem>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <tpm supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tpm-tis</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tpm-crb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>emulator</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>external</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendVersion'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>2.0</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </tpm>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <redirdev supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='bus'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>usb</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </redirdev>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <channel supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>pty</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>unix</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </channel>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <crypto supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='type'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>qemu</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>builtin</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </crypto>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <interface supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='backendType'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>default</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>passt</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </interface>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <panic supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='model'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>isa</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>hyperv</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </panic>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </devices>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   <features>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <gic supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <genid supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <backup supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <async-teardown supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <ps2 supported='yes'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <sev supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <sgx supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <hyperv supported='yes'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       <enum name='features'>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>relaxed</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vapic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>spinlocks</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vpindex</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>runtime</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>synic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>stimer</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>reset</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>vendor_id</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>frequencies</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>reenlightenment</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>tlbflush</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>ipi</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>avic</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>emsr_bitmap</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:         <value>xmm_input</value>
Oct 09 09:50:43 compute-1 nova_compute[161987]:       </enum>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     </hyperv>
Oct 09 09:50:43 compute-1 nova_compute[161987]:     <launchSecurity supported='no'/>
Oct 09 09:50:43 compute-1 nova_compute[161987]:   </features>
Oct 09 09:50:43 compute-1 nova_compute[161987]: </domainCapabilities>
Oct 09 09:50:43 compute-1 nova_compute[161987]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.164 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.165 2 INFO nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Secure Boot support detected
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.166 2 INFO nova.virt.libvirt.driver [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.174 2 DEBUG nova.virt.libvirt.driver [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.202 2 INFO nova.virt.node [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Determined node identity 79aa81b0-5a5d-4643-a355-ec5461cb321a from /var/lib/nova/compute_id
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.221 2 WARNING nova.compute.manager [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Compute nodes ['79aa81b0-5a5d-4643-a355-ec5461cb321a'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.255 2 INFO nova.compute.manager [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.289 2 WARNING nova.compute.manager [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG oslo_concurrency.lockutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG oslo_concurrency.lockutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG oslo_concurrency.lockutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG nova.compute.resource_tracker [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG oslo_concurrency.processutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:43 compute-1 sudo[162882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxxdsnftzjyuwrcmppicyozvblzzjvma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003443.1479714-5234-6103106829883/AnsiballZ_systemd.py'
Oct 09 09:50:43 compute-1 sudo[162882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:43.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:43 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:50:43 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/500541388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:43 compute-1 python3.9[162889]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.653 2 DEBUG oslo_concurrency.processutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:43 compute-1 systemd[1]: Stopping nova_compute container...
Oct 09 09:50:43 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Oct 09 09:50:43 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/500541388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:43 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2933965568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:43 compute-1 systemd[1]: Started libvirt nodedev daemon.
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.704 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.704 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.704 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:50:44 compute-1 virtqemud[162526]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 09 09:50:44 compute-1 virtqemud[162526]: hostname: compute-1
Oct 09 09:50:44 compute-1 virtqemud[162526]: End of file while reading data: Input/output error
Oct 09 09:50:44 compute-1 systemd[1]: libpod-a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2.scope: Deactivated successfully.
Oct 09 09:50:44 compute-1 systemd[1]: libpod-a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2.scope: Consumed 2.739s CPU time.
Oct 09 09:50:44 compute-1 podman[162909]: 2025-10-09 09:50:44.058988758 +0000 UTC m=+0.382134737 container died a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Oct 09 09:50:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d-merged.mount: Deactivated successfully.
Oct 09 09:50:44 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2-userdata-shm.mount: Deactivated successfully.
Oct 09 09:50:44 compute-1 podman[162909]: 2025-10-09 09:50:44.115746175 +0000 UTC m=+0.438892154 container cleanup a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute)
Oct 09 09:50:44 compute-1 podman[162909]: nova_compute
Oct 09 09:50:44 compute-1 podman[162952]: nova_compute
Oct 09 09:50:44 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 09 09:50:44 compute-1 systemd[1]: Stopped nova_compute container.
Oct 09 09:50:44 compute-1 systemd[1]: Starting nova_compute container...
Oct 09 09:50:44 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-1 podman[162962]: 2025-10-09 09:50:44.233956234 +0000 UTC m=+0.060029979 container init a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:50:44 compute-1 podman[162962]: 2025-10-09 09:50:44.238872467 +0000 UTC m=+0.064946212 container start a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, io.buildah.version=1.41.3)
Oct 09 09:50:44 compute-1 podman[162962]: nova_compute
Oct 09 09:50:44 compute-1 nova_compute[162974]: + sudo -E kolla_set_configs
Oct 09 09:50:44 compute-1 systemd[1]: Started nova_compute container.
Oct 09 09:50:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:44.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:44 compute-1 sudo[162882]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Validating config file
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying service configuration files
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/ceph
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Creating directory /etc/ceph
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Writing out command to execute
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 09:50:44 compute-1 nova_compute[162974]: ++ cat /run_command
Oct 09 09:50:44 compute-1 nova_compute[162974]: + CMD=nova-compute
Oct 09 09:50:44 compute-1 nova_compute[162974]: + ARGS=
Oct 09 09:50:44 compute-1 nova_compute[162974]: + sudo kolla_copy_cacerts
Oct 09 09:50:44 compute-1 nova_compute[162974]: + [[ ! -n '' ]]
Oct 09 09:50:44 compute-1 nova_compute[162974]: + . kolla_extend_start
Oct 09 09:50:44 compute-1 nova_compute[162974]: Running command: 'nova-compute'
Oct 09 09:50:44 compute-1 nova_compute[162974]: + echo 'Running command: '\''nova-compute'\'''
Oct 09 09:50:44 compute-1 nova_compute[162974]: + umask 0022
Oct 09 09:50:44 compute-1 nova_compute[162974]: + exec nova-compute
Oct 09 09:50:44 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/331903618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:44 compute-1 ceph-mon[9795]: pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:44 compute-1 sudo[163135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bowziwodzlpupcoxqhspaiuslelvykwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003444.5089335-5261-150029137126982/AnsiballZ_podman_container.py'
Oct 09 09:50:44 compute-1 sudo[163135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:44 compute-1 python3.9[163137]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 09 09:50:45 compute-1 systemd[1]: Started libpod-conmon-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a.scope.
Oct 09 09:50:45 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:50:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f9119592c662942c6340251e4b10b313c1c11314b53de3faa1a0bee718f28f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f9119592c662942c6340251e4b10b313c1c11314b53de3faa1a0bee718f28f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f9119592c662942c6340251e4b10b313c1c11314b53de3faa1a0bee718f28f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:45 compute-1 podman[163158]: 2025-10-09 09:50:45.037307732 +0000 UTC m=+0.072093094 container init 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, managed_by=edpm_ansible)
Oct 09 09:50:45 compute-1 podman[163158]: 2025-10-09 09:50:45.043237186 +0000 UTC m=+0.078022537 container start 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=nova_compute_init, managed_by=edpm_ansible)
Oct 09 09:50:45 compute-1 python3.9[163137]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Applying nova statedir ownership
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 09 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Nova statedir ownership complete
Oct 09 09:50:45 compute-1 systemd[1]: libpod-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a.scope: Deactivated successfully.
Oct 09 09:50:45 compute-1 conmon[163170]: conmon 8aaa249df0d42f42b8a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a.scope/container/memory.events
Oct 09 09:50:45 compute-1 podman[163190]: 2025-10-09 09:50:45.130843169 +0000 UTC m=+0.024810729 container died 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251001)
Oct 09 09:50:45 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a-userdata-shm.mount: Deactivated successfully.
Oct 09 09:50:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-c8f9119592c662942c6340251e4b10b313c1c11314b53de3faa1a0bee718f28f-merged.mount: Deactivated successfully.
Oct 09 09:50:45 compute-1 sudo[163135]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:45 compute-1 podman[163190]: 2025-10-09 09:50:45.159659223 +0000 UTC m=+0.053626762 container cleanup 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 09 09:50:45 compute-1 podman[163189]: 2025-10-09 09:50:45.160588535 +0000 UTC m=+0.048939340 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 09 09:50:45 compute-1 systemd[1]: libpod-conmon-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a.scope: Deactivated successfully.
Oct 09 09:50:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:45.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:45 compute-1 sshd-session[127762]: Connection closed by 192.168.122.30 port 59730
Oct 09 09:50:45 compute-1 sshd-session[127759]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:50:45 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Oct 09 09:50:45 compute-1 systemd[1]: session-37.scope: Consumed 1min 59.368s CPU time.
Oct 09 09:50:45 compute-1 systemd-logind[798]: Session 37 logged out. Waiting for processes to exit.
Oct 09 09:50:45 compute-1 systemd-logind[798]: Removed session 37.
Oct 09 09:50:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:45 compute-1 nova_compute[162974]: 2025-10-09 09:50:45.990 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:45 compute-1 nova_compute[162974]: 2025-10-09 09:50:45.990 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:45 compute-1 nova_compute[162974]: 2025-10-09 09:50:45.991 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:45 compute-1 nova_compute[162974]: 2025-10-09 09:50:45.991 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.095 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.105 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:46.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.513 2 INFO nova.virt.driver [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.592 2 INFO nova.compute.provider_config [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.600 2 DEBUG oslo_concurrency.lockutils [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.600 2 DEBUG oslo_concurrency.lockutils [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.600 2 DEBUG oslo_concurrency.lockutils [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 WARNING oslo_config.cfg [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 09 09:50:46 compute-1 nova_compute[162974]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 09 09:50:46 compute-1 nova_compute[162974]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 09 09:50:46 compute-1 nova_compute[162974]: and ``live_migration_inbound_addr`` respectively.
Oct 09 09:50:46 compute-1 nova_compute[162974]: ).  Its value may be silently ignored in the future.
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.762 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.774 2 INFO nova.virt.node [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Determined node identity 79aa81b0-5a5d-4643-a355-ec5461cb321a from /var/lib/nova/compute_id
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.774 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.775 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.775 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.775 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.784 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4d35e92f70> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.787 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4d35e92f70> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.788 2 INFO nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Connection event '1' reason 'None'
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.791 2 INFO nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host capabilities <capabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]: 
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <host>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <uuid>99ca1aa4-a8fe-49f8-8019-77dd20980206</uuid>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <arch>x86_64</arch>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <microcode version='167776725'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <signature family='25' model='1' stepping='1'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <maxphysaddr mode='emulate' bits='48'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='x2apic'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='tsc-deadline'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='osxsave'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='hypervisor'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='tsc_adjust'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='ospke'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='vaes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='vpclmulqdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='spec-ctrl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='stibp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='arch-capabilities'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='cmp_legacy'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='virt-ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='lbrv'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='tsc-scale'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='vmcb-clean'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='pause-filter'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='pfthreshold'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='vgif'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='rdctl-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='mds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature name='pschange-mc-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <pages unit='KiB' size='4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <pages unit='KiB' size='2048'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <pages unit='KiB' size='1048576'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <power_management>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <suspend_mem/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </power_management>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <iommu support='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <migration_features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <live/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <uri_transports>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <uri_transport>tcp</uri_transport>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <uri_transport>rdma</uri_transport>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </uri_transports>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </migration_features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <topology>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <cells num='1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <cell id='0'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:           <memory unit='KiB'>7865152</memory>
Oct 09 09:50:46 compute-1 nova_compute[162974]:           <pages unit='KiB' size='4'>1966288</pages>
Oct 09 09:50:46 compute-1 nova_compute[162974]:           <pages unit='KiB' size='2048'>0</pages>
Oct 09 09:50:46 compute-1 nova_compute[162974]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 09 09:50:46 compute-1 nova_compute[162974]:           <distances>
Oct 09 09:50:46 compute-1 nova_compute[162974]:             <sibling id='0' value='10'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:           </distances>
Oct 09 09:50:46 compute-1 nova_compute[162974]:           <cpus num='4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:           </cpus>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         </cell>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </cells>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </topology>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <cache>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </cache>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <secmodel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model>selinux</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <doi>0</doi>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </secmodel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <secmodel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model>dac</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <doi>0</doi>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </secmodel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </host>
Oct 09 09:50:46 compute-1 nova_compute[162974]: 
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <guest>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <os_type>hvm</os_type>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <arch name='i686'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <wordsize>32</wordsize>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <domain type='qemu'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <domain type='kvm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </arch>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <pae/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <nonpae/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <acpi default='on' toggle='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <apic default='on' toggle='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <cpuselection/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <deviceboot/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <disksnapshot default='on' toggle='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <externalSnapshot/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </guest>
Oct 09 09:50:46 compute-1 nova_compute[162974]: 
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <guest>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <os_type>hvm</os_type>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <arch name='x86_64'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <wordsize>64</wordsize>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <domain type='qemu'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <domain type='kvm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </arch>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <acpi default='on' toggle='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <apic default='on' toggle='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <cpuselection/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <deviceboot/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <disksnapshot default='on' toggle='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <externalSnapshot/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </guest>
Oct 09 09:50:46 compute-1 nova_compute[162974]: 
Oct 09 09:50:46 compute-1 nova_compute[162974]: </capabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]: 
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.797 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.798 2 DEBUG nova.virt.libvirt.volume.mount [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.801 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 09 09:50:46 compute-1 nova_compute[162974]: <domainCapabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <domain>kvm</domain>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <arch>i686</arch>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <vcpu max='4096'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <iothreads supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <os supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <enum name='firmware'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <loader supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>rom</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pflash</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='readonly'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>yes</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>no</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='secure'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>no</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </loader>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </os>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>on</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>off</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='maximumMigratable'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>on</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>off</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='succor'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='custom' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Denverton'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Denverton-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-128'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-256'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-512'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='KnightsMill'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SierraForest'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='athlon'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='athlon-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='core2duo'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='core2duo-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='coreduo'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='coreduo-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='n270'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='n270-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='phenom'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='phenom-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <memoryBacking supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <enum name='sourceType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>file</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>anonymous</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>memfd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </memoryBacking>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <disk supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='diskDevice'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>disk</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>cdrom</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>floppy</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>lun</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='bus'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>fdc</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>scsi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>sata</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <graphics supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vnc</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>egl-headless</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>dbus</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </graphics>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <video supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='modelType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vga</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>cirrus</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>none</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>bochs</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ramfb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </video>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <hostdev supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='mode'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>subsystem</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='startupPolicy'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>default</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>mandatory</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>requisite</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>optional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='subsysType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pci</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>scsi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='capsType'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='pciBackend'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </hostdev>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <rng supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>random</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>egd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>builtin</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <filesystem supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='driverType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>path</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>handle</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtiofs</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </filesystem>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <tpm supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tpm-tis</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tpm-crb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>emulator</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>external</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendVersion'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>2.0</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </tpm>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <redirdev supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='bus'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </redirdev>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <channel supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pty</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>unix</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </channel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <crypto supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>qemu</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>builtin</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </crypto>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <interface supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>default</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>passt</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <panic supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>isa</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>hyperv</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </panic>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <gic supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <genid supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <backup supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <async-teardown supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <ps2 supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <sev supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <sgx supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <hyperv supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='features'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>relaxed</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vapic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>spinlocks</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vpindex</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>runtime</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>synic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>stimer</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>reset</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vendor_id</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>frequencies</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>reenlightenment</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tlbflush</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ipi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>avic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>emsr_bitmap</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>xmm_input</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </hyperv>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <launchSecurity supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </features>
Oct 09 09:50:46 compute-1 nova_compute[162974]: </domainCapabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.804 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 09 09:50:46 compute-1 nova_compute[162974]: <domainCapabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <domain>kvm</domain>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <arch>i686</arch>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <vcpu max='240'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <iothreads supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <os supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <enum name='firmware'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <loader supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>rom</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pflash</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='readonly'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>yes</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>no</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='secure'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>no</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </loader>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </os>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>on</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>off</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='maximumMigratable'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>on</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>off</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='succor'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='custom' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Denverton'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Denverton-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-128'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-256'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-512'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 ceph-mon[9795]: pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='KnightsMill'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SierraForest'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='athlon'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='athlon-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='core2duo'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='core2duo-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='coreduo'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='coreduo-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='n270'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='n270-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='phenom'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='phenom-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <memoryBacking supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <enum name='sourceType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>file</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>anonymous</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>memfd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </memoryBacking>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <disk supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='diskDevice'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>disk</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>cdrom</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>floppy</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>lun</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='bus'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ide</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>fdc</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>scsi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>sata</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <graphics supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vnc</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>egl-headless</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>dbus</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </graphics>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <video supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='modelType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vga</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>cirrus</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>none</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>bochs</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ramfb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </video>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <hostdev supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='mode'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>subsystem</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='startupPolicy'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>default</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>mandatory</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>requisite</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>optional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='subsysType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pci</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>scsi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='capsType'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='pciBackend'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </hostdev>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <rng supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>random</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>egd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>builtin</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <filesystem supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='driverType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>path</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>handle</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtiofs</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </filesystem>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <tpm supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tpm-tis</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tpm-crb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>emulator</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>external</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendVersion'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>2.0</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </tpm>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <redirdev supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='bus'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </redirdev>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <channel supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pty</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>unix</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </channel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <crypto supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>qemu</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>builtin</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </crypto>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <interface supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>default</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>passt</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <panic supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>isa</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>hyperv</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </panic>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <gic supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <genid supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <backup supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <async-teardown supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <ps2 supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <sev supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <sgx supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <hyperv supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='features'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>relaxed</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vapic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>spinlocks</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vpindex</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>runtime</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>synic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>stimer</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>reset</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vendor_id</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>frequencies</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>reenlightenment</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tlbflush</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ipi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>avic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>emsr_bitmap</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>xmm_input</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </hyperv>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <launchSecurity supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </features>
Oct 09 09:50:46 compute-1 nova_compute[162974]: </domainCapabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.817 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.820 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 09 09:50:46 compute-1 nova_compute[162974]: <domainCapabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <domain>kvm</domain>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <arch>x86_64</arch>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <vcpu max='4096'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <iothreads supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <os supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <enum name='firmware'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>efi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <loader supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>rom</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pflash</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='readonly'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>yes</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>no</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='secure'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>yes</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>no</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </loader>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </os>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>on</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>off</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='maximumMigratable'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>on</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>off</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='succor'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='custom' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Denverton'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Denverton-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-128'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-256'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-512'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='KnightsMill'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SierraForest'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='athlon'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='athlon-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='core2duo'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='core2duo-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='coreduo'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='coreduo-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='n270'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='n270-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='phenom'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='phenom-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <memoryBacking supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <enum name='sourceType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>file</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>anonymous</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>memfd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </memoryBacking>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <disk supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='diskDevice'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>disk</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>cdrom</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>floppy</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>lun</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='bus'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>fdc</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>scsi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>sata</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <graphics supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vnc</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>egl-headless</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>dbus</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </graphics>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <video supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='modelType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vga</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>cirrus</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>none</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>bochs</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ramfb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </video>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <hostdev supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='mode'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>subsystem</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='startupPolicy'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>default</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>mandatory</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>requisite</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>optional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='subsysType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pci</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>scsi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='capsType'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='pciBackend'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </hostdev>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <rng supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>random</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>egd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>builtin</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <filesystem supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='driverType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>path</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>handle</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtiofs</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </filesystem>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <tpm supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tpm-tis</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tpm-crb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>emulator</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>external</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendVersion'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>2.0</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </tpm>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <redirdev supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='bus'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </redirdev>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <channel supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pty</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>unix</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </channel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <crypto supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>qemu</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>builtin</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </crypto>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <interface supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>default</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>passt</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <panic supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>isa</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>hyperv</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </panic>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <gic supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <genid supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <backup supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <async-teardown supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <ps2 supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <sev supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <sgx supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <hyperv supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='features'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>relaxed</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vapic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>spinlocks</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vpindex</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>runtime</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>synic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>stimer</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>reset</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vendor_id</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>frequencies</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>reenlightenment</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tlbflush</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ipi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>avic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>emsr_bitmap</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>xmm_input</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </hyperv>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <launchSecurity supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </features>
Oct 09 09:50:46 compute-1 nova_compute[162974]: </domainCapabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.866 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 09 09:50:46 compute-1 nova_compute[162974]: <domainCapabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <domain>kvm</domain>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <arch>x86_64</arch>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <vcpu max='240'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <iothreads supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <os supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <enum name='firmware'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <loader supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>rom</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pflash</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='readonly'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>yes</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>no</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='secure'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>no</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </loader>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </os>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>on</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>off</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='maximumMigratable'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>on</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>off</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='succor'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <mode name='custom' supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Denverton'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Denverton-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-128'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-256'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx10-512'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Haswell-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='KnightsMill'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xop'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='la57'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SierraForest'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='hle'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='athlon'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='athlon-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='core2duo'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='core2duo-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='coreduo'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='coreduo-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='n270'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='n270-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='ss'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='phenom'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <blockers model='phenom-v1'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </blockers>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </mode>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <memoryBacking supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <enum name='sourceType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>file</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>anonymous</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <value>memfd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </memoryBacking>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <disk supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='diskDevice'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>disk</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>cdrom</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>floppy</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>lun</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='bus'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ide</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>fdc</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>scsi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>sata</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <graphics supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vnc</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>egl-headless</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>dbus</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </graphics>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <video supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='modelType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vga</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>cirrus</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>none</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>bochs</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ramfb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </video>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <hostdev supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='mode'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>subsystem</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='startupPolicy'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>default</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>mandatory</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>requisite</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>optional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='subsysType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pci</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>scsi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='capsType'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='pciBackend'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </hostdev>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <rng supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>random</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>egd</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>builtin</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <filesystem supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='driverType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>path</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>handle</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>virtiofs</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </filesystem>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <tpm supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tpm-tis</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tpm-crb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>emulator</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>external</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendVersion'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>2.0</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </tpm>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <redirdev supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='bus'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>usb</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </redirdev>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <channel supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>pty</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>unix</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </channel>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <crypto supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='type'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>qemu</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>builtin</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </crypto>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <interface supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='backendType'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>default</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>passt</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <panic supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='model'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>isa</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>hyperv</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </panic>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   <features>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <gic supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <genid supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <backup supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <async-teardown supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <ps2 supported='yes'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <sev supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <sgx supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <hyperv supported='yes'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       <enum name='features'>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>relaxed</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vapic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>spinlocks</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vpindex</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>runtime</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>synic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>stimer</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>reset</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>vendor_id</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>frequencies</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>reenlightenment</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>tlbflush</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>ipi</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>avic</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>emsr_bitmap</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:         <value>xmm_input</value>
Oct 09 09:50:46 compute-1 nova_compute[162974]:       </enum>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     </hyperv>
Oct 09 09:50:46 compute-1 nova_compute[162974]:     <launchSecurity supported='no'/>
Oct 09 09:50:46 compute-1 nova_compute[162974]:   </features>
Oct 09 09:50:46 compute-1 nova_compute[162974]: </domainCapabilities>
Oct 09 09:50:46 compute-1 nova_compute[162974]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.916 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.916 2 INFO nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Secure Boot support detected
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.917 2 INFO nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.918 2 INFO nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.923 2 DEBUG nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.953 2 INFO nova.virt.node [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Determined node identity 79aa81b0-5a5d-4643-a355-ec5461cb321a from /var/lib/nova/compute_id
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.961 2 WARNING nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Compute nodes ['79aa81b0-5a5d-4643-a355-ec5461cb321a'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.978 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 WARNING nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.989 2 DEBUG oslo_concurrency.processutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:50:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1952717457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.336 2 DEBUG oslo_concurrency.processutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:47.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.528 2 WARNING nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.529 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5360MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.529 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.529 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.556 2 WARNING nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] No compute node record for compute-1.ctlplane.example.com:79aa81b0-5a5d-4643-a355-ec5461cb321a: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 79aa81b0-5a5d-4643-a355-ec5461cb321a could not be found.
Oct 09 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.578 2 INFO nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 79aa81b0-5a5d-4643-a355-ec5461cb321a
Oct 09 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.664 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.664 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:50:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1952717457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/173428441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2700888579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.134 2 INFO nova.scheduler.client.report [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [req-5dda1e5e-310a-42a6-a769-80397a654cd1] Created resource provider record via placement API for resource provider with UUID 79aa81b0-5a5d-4643-a355-ec5461cb321a and name compute-1.ctlplane.example.com.
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.186 2 DEBUG oslo_concurrency.processutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:48.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:50:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2233634120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.532 2 DEBUG oslo_concurrency.processutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.535 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 09 09:50:48 compute-1 nova_compute[162974]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.535 2 INFO nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] kernel doesn't support AMD SEV
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.536 2 DEBUG nova.compute.provider_tree [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.537 2 DEBUG nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.574 2 DEBUG nova.scheduler.client.report [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updated inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.574 2 DEBUG nova.compute.provider_tree [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updating resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.574 2 DEBUG nova.compute.provider_tree [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.658 2 DEBUG nova.compute.provider_tree [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updating resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.678 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.679 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.679 2 DEBUG nova.service [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.735 2 DEBUG nova.service [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 09 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.735 2 DEBUG nova.servicegroup.drivers.db [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 09 09:50:48 compute-1 ceph-mon[9795]: pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2233634120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2871433844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4019565509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:50:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:49.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:50:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:50:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:50.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:50 compute-1 sudo[163326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:50:50 compute-1 sudo[163326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:50 compute-1 sudo[163326]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:50 compute-1 ceph-mon[9795]: pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:51.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:50:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:50:52 compute-1 ceph-mon[9795]: pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:53.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:54.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:54 compute-1 podman[163353]: 2025-10-09 09:50:54.549890436 +0000 UTC m=+0.056505193 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 09:50:54 compute-1 ceph-mon[9795]: pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:55.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:55 compute-1 systemd[1]: Stopping User Manager for UID 1000...
Oct 09 09:50:55 compute-1 systemd[1268]: Activating special unit Exit the Session...
Oct 09 09:50:55 compute-1 systemd[1268]: Removed slice User Background Tasks Slice.
Oct 09 09:50:55 compute-1 systemd[1268]: Stopped target Main User Target.
Oct 09 09:50:55 compute-1 systemd[1268]: Stopped target Basic System.
Oct 09 09:50:55 compute-1 systemd[1268]: Stopped target Paths.
Oct 09 09:50:55 compute-1 systemd[1268]: Stopped target Sockets.
Oct 09 09:50:55 compute-1 systemd[1268]: Stopped target Timers.
Oct 09 09:50:55 compute-1 systemd[1268]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:50:55 compute-1 systemd[1268]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 09:50:55 compute-1 systemd[1268]: Closed D-Bus User Message Bus Socket.
Oct 09 09:50:55 compute-1 systemd[1268]: Stopped Create User's Volatile Files and Directories.
Oct 09 09:50:55 compute-1 systemd[1268]: Removed slice User Application Slice.
Oct 09 09:50:55 compute-1 systemd[1268]: Reached target Shutdown.
Oct 09 09:50:55 compute-1 systemd[1268]: Finished Exit the Session.
Oct 09 09:50:55 compute-1 systemd[1268]: Reached target Exit the Session.
Oct 09 09:50:55 compute-1 systemd[1]: user@1000.service: Deactivated successfully.
Oct 09 09:50:55 compute-1 systemd[1]: Stopped User Manager for UID 1000.
Oct 09 09:50:55 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct 09 09:50:55 compute-1 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct 09 09:50:55 compute-1 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct 09 09:50:55 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct 09 09:50:55 compute-1 systemd[1]: Removed slice User Slice of UID 1000.
Oct 09 09:50:55 compute-1 systemd[1]: user-1000.slice: Consumed 8min 7.321s CPU time.
Oct 09 09:50:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:56 compute-1 ceph-mon[9795]: pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:50:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:57.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:58.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:58 compute-1 ceph-mon[9795]: pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:50:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:59.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:00.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:00 compute-1 ceph-mon[9795]: pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:51:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:01.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:02.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:02 compute-1 ceph-mon[9795]: pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:51:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:03.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:04.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:04 compute-1 podman[163383]: 2025-10-09 09:51:04.547317225 +0000 UTC m=+0.060192457 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:51:04 compute-1 ceph-mon[9795]: pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:51:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:51:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:05.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:06.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:06 compute-1 ceph-mon[9795]: pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:51:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:07.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:08.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:08 compute-1 ceph-mon[9795]: pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 09:51:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195742608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 09:51:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195742608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:09.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 09:51:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2714317801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 09:51:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2714317801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2195742608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2195742608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2714317801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2714317801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/1379833168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/1379833168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:51:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:51:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:51:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:51:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:51:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:51:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:51:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:10.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:10 compute-1 sudo[163403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:51:10 compute-1 sudo[163403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:10 compute-1 sudo[163403]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:10 compute-1 ceph-mon[9795]: pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:11.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:12 compute-1 podman[163429]: 2025-10-09 09:51:12.525454031 +0000 UTC m=+0.038029732 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 09:51:12 compute-1 ceph-mon[9795]: pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:13.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:14.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:14 compute-1 ceph-mon[9795]: pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:15.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:15 compute-1 podman[163447]: 2025-10-09 09:51:15.554756066 +0000 UTC m=+0.067290865 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:51:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:16.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:16 compute-1 ceph-mon[9795]: pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:17.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:18.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:19 compute-1 ceph-mon[9795]: pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:19.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:51:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:20.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:21 compute-1 ceph-mon[9795]: pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:21.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:22.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:23 compute-1 ceph-mon[9795]: pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:23.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:24.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:25 compute-1 ceph-mon[9795]: pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:25.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:25 compute-1 podman[163470]: 2025-10-09 09:51:25.55226529 +0000 UTC m=+0.065315009 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 09 09:51:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:27 compute-1 ceph-mon[9795]: pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:27.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:28 compute-1 nova_compute[162974]: 2025-10-09 09:51:28.736 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:28 compute-1 nova_compute[162974]: 2025-10-09 09:51:28.761 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:29 compute-1 ceph-mon[9795]: pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:29.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:30 compute-1 sudo[163495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:51:30 compute-1 sudo[163495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:30 compute-1 sudo[163495]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:31 compute-1 ceph-mon[9795]: pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.232192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491232216, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 759, "num_deletes": 250, "total_data_size": 1519242, "memory_usage": 1544856, "flush_reason": "Manual Compaction"}
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491234754, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 675072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17828, "largest_seqno": 18582, "table_properties": {"data_size": 671902, "index_size": 1014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8252, "raw_average_key_size": 20, "raw_value_size": 665286, "raw_average_value_size": 1614, "num_data_blocks": 44, "num_entries": 412, "num_filter_entries": 412, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003438, "oldest_key_time": 1760003438, "file_creation_time": 1760003491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 2583 microseconds, and 1908 cpu microseconds.
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.234776) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 675072 bytes OK
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.234787) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235083) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235093) EVENT_LOG_v1 {"time_micros": 1760003491235090, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235104) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1515203, prev total WAL file size 1515203, number of live WAL files 2.
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(659KB)], [30(14MB)]
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491235728, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 15919404, "oldest_snapshot_seqno": -1}
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4915 keys, 12159040 bytes, temperature: kUnknown
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491270875, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12159040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12125407, "index_size": 20275, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 123535, "raw_average_key_size": 25, "raw_value_size": 12035280, "raw_average_value_size": 2448, "num_data_blocks": 847, "num_entries": 4915, "num_filter_entries": 4915, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.271098) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12159040 bytes
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.271491) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 452.3 rd, 345.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.5 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(41.6) write-amplify(18.0) OK, records in: 5407, records dropped: 492 output_compression: NoCompression
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.271509) EVENT_LOG_v1 {"time_micros": 1760003491271503, "job": 16, "event": "compaction_finished", "compaction_time_micros": 35199, "compaction_time_cpu_micros": 19138, "output_level": 6, "num_output_files": 1, "total_output_size": 12159040, "num_input_records": 5407, "num_output_records": 4915, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491271710, "job": 16, "event": "table_file_deletion", "file_number": 32}
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491273405, "job": 16, "event": "table_file_deletion", "file_number": 30}
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:32 compute-1 ceph-mon[9795]: pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:32.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:34.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:34 compute-1 ceph-mon[9795]: pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:51:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:35.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:35 compute-1 podman[163523]: 2025-10-09 09:51:35.535285585 +0000 UTC m=+0.043745394 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:51:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:36.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:36 compute-1 ceph-mon[9795]: pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:37.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:38.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:38 compute-1 ceph-mon[9795]: pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:40 compute-1 ceph-mon[9795]: pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:41.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:41 compute-1 sudo[163544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:51:41 compute-1 sudo[163544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:41 compute-1 sudo[163544]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:41 compute-1 sudo[163569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:51:41 compute-1 sudo[163569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:42 compute-1 sudo[163569]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:42.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:42 compute-1 ceph-mon[9795]: pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:51:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:43 compute-1 podman[163624]: 2025-10-09 09:51:43.533674413 +0000 UTC m=+0.042306615 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:51:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:44.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:44 compute-1 ceph-mon[9795]: pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:51:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:45 compute-1 ceph-mon[9795]: pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 09:51:45 compute-1 ceph-mon[9795]: pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.168 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.168 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.168 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.168 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.210 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.210 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.210 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.210 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.211 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:51:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:51:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:51:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:51:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3617190942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.605 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:51:46 compute-1 podman[163661]: 2025-10-09 09:51:46.620351914 +0000 UTC m=+0.125135066 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.842 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.844 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5414MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.844 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.845 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:51:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3617190942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2099201754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3423074867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:47.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.586 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.586 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.612 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:51:47 compute-1 sudo[163682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:51:47 compute-1 sudo[163682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:47 compute-1 sudo[163682]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:47 compute-1 ceph-mon[9795]: pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct 09 09:51:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3020292841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/337992853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:51:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2037733150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.955 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.961 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.984 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.985 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.985 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:51:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2037733150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:49 compute-1 ceph-mon[9795]: pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct 09 09:51:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:51:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:50 compute-1 sudo[163729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:51:50 compute-1 sudo[163729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:50 compute-1 sudo[163729]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:51:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:51:51 compute-1 ceph-mon[9795]: pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 718 B/s rd, 0 op/s
Oct 09 09:51:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:52.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:53.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:53 compute-1 ceph-mon[9795]: pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct 09 09:51:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:51:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:54.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:51:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/1875037942' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:51:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/4160620108' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:51:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:55.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:55 compute-1 ceph-mon[9795]: pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:55 compute-1 ceph-mon[9795]: from='client.24673 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 09 09:51:55 compute-1 ceph-mon[9795]: from='client.14982 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 09 09:51:55 compute-1 ceph-mon[9795]: from='client.14982 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 09 09:51:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:56.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:56 compute-1 podman[163757]: 2025-10-09 09:51:56.57813262 +0000 UTC m=+0.088025967 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 09:51:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:57.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:57 compute-1 ceph-mon[9795]: pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:51:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:51:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:59.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:59 compute-1 ceph-mon[9795]: pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:00.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:52:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:01.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:52:01 compute-1 ceph-mon[9795]: pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:02.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:03.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:03 compute-1 ceph-mon[9795]: pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:52:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:52:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:04.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:52:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:52:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:05.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:05 compute-1 ceph-mon[9795]: pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:06.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:06 compute-1 podman[163785]: 2025-10-09 09:52:06.554499124 +0000 UTC m=+0.066832657 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 09:52:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:07.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:08 compute-1 ceph-mon[9795]: pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:52:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:08.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:52:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:52:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:09.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:52:10 compute-1 ceph-mon[9795]: pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:52:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:52:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:52:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:52:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:52:10.030 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:52:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:10 compute-1 sudo[163804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:52:10 compute-1 sudo[163804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:10 compute-1 sudo[163804]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:52:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:11.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:52:12 compute-1 ceph-mon[9795]: pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/1258684450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:52:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/1258684450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:52:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:13.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:14 compute-1 ceph-mon[9795]: pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:14 compute-1 podman[163831]: 2025-10-09 09:52:14.523183336 +0000 UTC m=+0.035245826 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:52:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 09 09:52:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2909389839' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:52:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 09 09:52:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/247613377' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:52:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2909389839' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:52:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/247613377' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:52:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:52:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:15.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:52:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:16 compute-1 ceph-mon[9795]: pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:16 compute-1 ceph-mon[9795]: from='client.24728 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 09 09:52:16 compute-1 ceph-mon[9795]: from='client.24731 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 09 09:52:16 compute-1 ceph-mon[9795]: from='client.24731 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 09 09:52:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:17 compute-1 podman[163849]: 2025-10-09 09:52:17.532870688 +0000 UTC m=+0.045263075 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct 09 09:52:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:17.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:18 compute-1 ceph-mon[9795]: pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:52:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:18.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:52:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:20 compute-1 ceph-mon[9795]: pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:52:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:21.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:22 compute-1 ceph-mon[9795]: pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:22.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:24 compute-1 ceph-mon[9795]: pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:52:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:52:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:52:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:26 compute-1 ceph-mon[9795]: pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:26.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:27 compute-1 podman[163871]: 2025-10-09 09:52:27.543309987 +0000 UTC m=+0.056005422 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 09 09:52:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:27.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:28 compute-1 ceph-mon[9795]: pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:52:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:28.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:30 compute-1 ceph-mon[9795]: pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:30.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:30 compute-1 sudo[163896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:52:30 compute-1 sudo[163896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:30 compute-1 sudo[163896]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:31.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:32 compute-1 ceph-mon[9795]: pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:52:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:32.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:52:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:52:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:52:34 compute-1 ceph-mon[9795]: pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:52:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:34.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:52:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:52:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:52:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:36 compute-1 ceph-mon[9795]: pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:36.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:37 compute-1 podman[163925]: 2025-10-09 09:52:37.531347645 +0000 UTC m=+0.042624362 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:52:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:38 compute-1 ceph-mon[9795]: pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:52:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:38.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:52:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:39.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:40 compute-1 ceph-mon[9795]: pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:41.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:42 compute-1 ceph-mon[9795]: pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:42.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:52:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:43.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:52:44 compute-1 ceph-mon[9795]: pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:44.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:45 compute-1 podman[163946]: 2025-10-09 09:52:45.525225929 +0000 UTC m=+0.038016088 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 09:52:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:45.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:46 compute-1 ceph-mon[9795]: pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:46.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:47 compute-1 sudo[163963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:52:47 compute-1 sudo[163963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:47 compute-1 sudo[163963]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:47 compute-1 sudo[163994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:52:47 compute-1 sudo[163994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:47 compute-1 podman[163987]: 2025-10-09 09:52:47.887074126 +0000 UTC m=+0.035558085 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001)
Oct 09 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.979 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.980 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.992 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.113 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.113 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.127 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.127 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.128 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.140 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.140 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:52:48 compute-1 ceph-mon[9795]: pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/216110203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:48 compute-1 sudo[163994]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:48.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:52:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/942557203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.482 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.656 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.657 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5411MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": 
"0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.657 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.657 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.701 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.701 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.715 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:52:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:52:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3099014491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.046 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.049 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.062 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.063 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.063 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3056215677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/942557203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2467923929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3099014491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:52:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:49.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:52:50 compute-1 ceph-mon[9795]: pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:52:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3760170036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:52:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:50.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:50 compute-1 sudo[164104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:52:50 compute-1 sudo[164104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:50 compute-1 sudo[164104]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:51.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:51 compute-1 sudo[164130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:52:51 compute-1 sudo[164130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:51 compute-1 sudo[164130]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:52 compute-1 ceph-mon[9795]: pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:52:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:52:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:52:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:52.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:54 compute-1 ceph-mon[9795]: pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:54.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:55.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:56 compute-1 ceph-mon[9795]: pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:52:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:56.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:52:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:57.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:52:58 compute-1 ceph-mon[9795]: pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:58 compute-1 podman[164158]: 2025-10-09 09:52:58.551323722 +0000 UTC m=+0.062495563 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 09 09:52:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:52:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:59.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:00 compute-1 ceph-mon[9795]: pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:53:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:00.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:01 compute-1 ceph-mon[9795]: pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:01.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:02 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:02.047 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:53:02 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:02.047 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:53:02 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:02.048 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:53:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:03 compute-1 ceph-mon[9795]: pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:03.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:05 compute-1 ceph-mon[9795]: pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:53:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:53:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:05.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:53:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:06.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:07 compute-1 ceph-mon[9795]: pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:07.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:08.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:08 compute-1 podman[164189]: 2025-10-09 09:53:08.52649488 +0000 UTC m=+0.038667827 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 09:53:09 compute-1 ceph-mon[9795]: pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:53:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:09.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:53:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:53:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:10.030 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:53:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:10.030 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:53:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:10.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:10 compute-1 sudo[164207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:53:11 compute-1 sudo[164207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:11 compute-1 sudo[164207]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:11 compute-1 ceph-mon[9795]: pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:53:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:11.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:53:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:12.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/208649628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:53:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/208649628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:53:13 compute-1 ceph-mon[9795]: pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:13.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:53:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:14.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:53:15 compute-1 ceph-mon[9795]: pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:15.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.262014) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596262034, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1313, "num_deletes": 256, "total_data_size": 3197999, "memory_usage": 3247496, "flush_reason": "Manual Compaction"}
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596267670, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2067926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18587, "largest_seqno": 19895, "table_properties": {"data_size": 2062313, "index_size": 2944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11873, "raw_average_key_size": 18, "raw_value_size": 2050859, "raw_average_value_size": 3270, "num_data_blocks": 132, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003492, "oldest_key_time": 1760003492, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5701 microseconds, and 4094 cpu microseconds.
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.267708) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2067926 bytes OK
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.267725) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268080) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268091) EVENT_LOG_v1 {"time_micros": 1760003596268088, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3191684, prev total WAL file size 3191684, number of live WAL files 2.
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268627) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2019KB)], [33(11MB)]
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596268656, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14226966, "oldest_snapshot_seqno": -1}
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5016 keys, 13755117 bytes, temperature: kUnknown
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596306599, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13755117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13719940, "index_size": 21563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126919, "raw_average_key_size": 25, "raw_value_size": 13626990, "raw_average_value_size": 2716, "num_data_blocks": 890, "num_entries": 5016, "num_filter_entries": 5016, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.306767) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13755117 bytes
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.307211) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 374.6 rd, 362.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.6 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(13.5) write-amplify(6.7) OK, records in: 5542, records dropped: 526 output_compression: NoCompression
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.307223) EVENT_LOG_v1 {"time_micros": 1760003596307218, "job": 18, "event": "compaction_finished", "compaction_time_micros": 37980, "compaction_time_cpu_micros": 18793, "output_level": 6, "num_output_files": 1, "total_output_size": 13755117, "num_input_records": 5542, "num_output_records": 5016, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596307482, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596309063, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:16.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:16 compute-1 podman[164235]: 2025-10-09 09:53:16.523141932 +0000 UTC m=+0.036110256 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true)
Oct 09 09:53:17 compute-1 ceph-mon[9795]: pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:53:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:17.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:18 compute-1 podman[164252]: 2025-10-09 09:53:18.558255381 +0000 UTC m=+0.068695868 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 09 09:53:19 compute-1 ceph-mon[9795]: pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:53:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:19.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:53:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:53:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:20.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:53:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:21 compute-1 ceph-mon[9795]: pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:53:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:21.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:22.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:23 compute-1 ceph-mon[9795]: pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:53:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:24.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:25 compute-1 ceph-mon[9795]: pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:53:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:25.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:26.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:27 compute-1 ceph-mon[9795]: pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:53:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:27.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:28.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:29 compute-1 ceph-mon[9795]: pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:29 compute-1 podman[164274]: 2025-10-09 09:53:29.541598284 +0000 UTC m=+0.053030415 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:53:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:29.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:30.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:31 compute-1 sudo[164297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:53:31 compute-1 sudo[164297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:31 compute-1 sudo[164297]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:31 compute-1 ceph-mon[9795]: pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:31.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:32.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:33 compute-1 ceph-mon[9795]: pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:33.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:53:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:34.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:53:35 compute-1 ceph-mon[9795]: pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:53:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:35.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:36.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:37 compute-1 ceph-mon[9795]: pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:38.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:39 compute-1 podman[164327]: 2025-10-09 09:53:39.524185321 +0000 UTC m=+0.037441555 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:53:39 compute-1 ceph-mon[9795]: pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:39.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:40.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:41 compute-1 ceph-mon[9795]: pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:42.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:43 compute-1 ceph-mon[9795]: pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:44.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:45 compute-1 ceph-mon[9795]: pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:45.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:46.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:47 compute-1 podman[164348]: 2025-10-09 09:53:47.523304351 +0000 UTC m=+0.036366059 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 09 09:53:47 compute-1 ceph-mon[9795]: pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:53:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:47.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.050 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.050 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.050 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.129 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.129 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.129 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.155 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.155 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.155 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.155 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.156 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:53:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:53:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1414105849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:53:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:48.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.490 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:53:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1414105849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.668 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.669 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5389MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": 
"0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.669 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.669 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.725 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.725 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.745 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:53:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:53:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1280325955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.078 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.081 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.094 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.095 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.095 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:53:49 compute-1 podman[164409]: 2025-10-09 09:53:49.526199583 +0000 UTC m=+0.036260990 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3)
Oct 09 09:53:49 compute-1 ceph-mon[9795]: pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4016438260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1280325955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/325359725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3354045688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:49.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:50 compute-1 nova_compute[162974]: 2025-10-09 09:53:50.081 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:50 compute-1 nova_compute[162974]: 2025-10-09 09:53:50.081 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:50 compute-1 nova_compute[162974]: 2025-10-09 09:53:50.081 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:50 compute-1 nova_compute[162974]: 2025-10-09 09:53:50.082 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:53:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:50.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1008308126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:51 compute-1 sudo[164426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:53:51 compute-1 sudo[164426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:51 compute-1 sudo[164426]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:51 compute-1 ceph-mon[9795]: pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:51.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:51 compute-1 sudo[164452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:53:51 compute-1 sudo[164452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:51 compute-1 sudo[164452]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:51 compute-1 sudo[164477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:53:51 compute-1 sudo[164477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:52 compute-1 sudo[164477]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000022s ======
Oct 09 09:53:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:52.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000022s
Oct 09 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:53:53 compute-1 ceph-mon[9795]: pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:53.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:53:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:54.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:53:55 compute-1 ceph-mon[9795]: pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:53:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:55.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:55 compute-1 sudo[164533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:53:55 compute-1 sudo[164533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:55 compute-1 sudo[164533]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:56.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:53:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:53:57 compute-1 ceph-mon[9795]: pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:57.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:58.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:59 compute-1 ceph-mon[9795]: pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:53:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:53:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:00.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:00 compute-1 podman[164560]: 2025-10-09 09:54:00.567324662 +0000 UTC m=+0.067409308 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 09 09:54:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:01.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:01 compute-1 ceph-mon[9795]: pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:54:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:02.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:03.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:03 compute-1 ceph-mon[9795]: pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:04.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:54:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:05.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:05 compute-1 ceph-mon[9795]: pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:06.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:07.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:07 compute-1 ceph-mon[9795]: pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:08.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:09 compute-1 ceph-mon[9795]: pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.713767) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649713813, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 789, "num_deletes": 251, "total_data_size": 1568977, "memory_usage": 1594880, "flush_reason": "Manual Compaction"}
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649717393, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1032670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19900, "largest_seqno": 20684, "table_properties": {"data_size": 1028915, "index_size": 1535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8578, "raw_average_key_size": 19, "raw_value_size": 1021376, "raw_average_value_size": 2316, "num_data_blocks": 68, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003597, "oldest_key_time": 1760003597, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3642 microseconds, and 2805 cpu microseconds.
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717415) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1032670 bytes OK
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717426) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717993) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.718005) EVENT_LOG_v1 {"time_micros": 1760003649718001, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.718015) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1564823, prev total WAL file size 1564823, number of live WAL files 2.
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.718379) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1008KB)], [36(13MB)]
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649718400, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 14787787, "oldest_snapshot_seqno": -1}
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4941 keys, 12621603 bytes, temperature: kUnknown
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649754636, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12621603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12587855, "index_size": 20262, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 125985, "raw_average_key_size": 25, "raw_value_size": 12497083, "raw_average_value_size": 2529, "num_data_blocks": 833, "num_entries": 4941, "num_filter_entries": 4941, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.754916) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12621603 bytes
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.755290) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 406.0 rd, 346.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.1 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(26.5) write-amplify(12.2) OK, records in: 5457, records dropped: 516 output_compression: NoCompression
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.755303) EVENT_LOG_v1 {"time_micros": 1760003649755297, "job": 20, "event": "compaction_finished", "compaction_time_micros": 36422, "compaction_time_cpu_micros": 19733, "output_level": 6, "num_output_files": 1, "total_output_size": 12621603, "num_input_records": 5457, "num_output_records": 4941, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649755872, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649757885, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.718347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:10.031 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:54:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:10.031 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:54:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:10.031 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:54:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:10.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:10 compute-1 podman[164588]: 2025-10-09 09:54:10.543310356 +0000 UTC m=+0.045256437 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:54:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:11 compute-1 sudo[164606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:54:11 compute-1 sudo[164606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:11 compute-1 sudo[164606]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:11.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:11 compute-1 ceph-mon[9795]: pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:12.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:13.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:13 compute-1 ceph-mon[9795]: pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:14.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:15.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:15 compute-1 ceph-mon[9795]: pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:16.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:17.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:17 compute-1 ceph-mon[9795]: pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:18 compute-1 podman[164635]: 2025-10-09 09:54:18.541251536 +0000 UTC m=+0.042355586 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 09 09:54:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:19.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:19 compute-1 ceph-mon[9795]: pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:54:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:20.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:20 compute-1 podman[164653]: 2025-10-09 09:54:20.532683379 +0000 UTC m=+0.036890507 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 09 09:54:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:21.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:21 compute-1 ceph-mon[9795]: pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:22.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:23.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:23 compute-1 ceph-mon[9795]: pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:24.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:25.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:25 compute-1 ceph-mon[9795]: pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:26.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:27 compute-1 ceph-mon[9795]: pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:28.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - - [09/Oct/2025:09:54:28.650 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.000000000s
Oct 09 09:54:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:29.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:29 compute-1 ceph-mon[9795]: pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:30 compute-1 PackageKit[96939]: daemon quit
Oct 09 09:54:30 compute-1 systemd[1]: packagekit.service: Deactivated successfully.
Oct 09 09:54:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:30.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:31 compute-1 sudo[164675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:54:31 compute-1 sudo[164675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:31 compute-1 sudo[164675]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:31 compute-1 podman[164699]: 2025-10-09 09:54:31.327253001 +0000 UTC m=+0.060011194 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 09:54:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:31 compute-1 ceph-mon[9795]: pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:32.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:32 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct 09 09:54:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:33.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:33 compute-1 ceph-mon[9795]: pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:33 compute-1 ceph-mon[9795]: osdmap e134: 3 total, 3 up, 3 in
Oct 09 09:54:33 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct 09 09:54:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:34.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:34 compute-1 ceph-mon[9795]: osdmap e135: 3 total, 3 up, 3 in
Oct 09 09:54:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:54:34 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct 09 09:54:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:35.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:35 compute-1 ceph-mon[9795]: pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct 09 09:54:35 compute-1 ceph-mon[9795]: osdmap e136: 3 total, 3 up, 3 in
Oct 09 09:54:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct 09 09:54:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:36.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:36 compute-1 ceph-mon[9795]: osdmap e137: 3 total, 3 up, 3 in
Oct 09 09:54:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:37 compute-1 ceph-mon[9795]: pgmap v637: 337 pgs: 337 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.1 MiB/s wr, 68 op/s
Oct 09 09:54:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:38.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:39 compute-1 ceph-mon[9795]: pgmap v638: 337 pgs: 337 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.7 MiB/s wr, 50 op/s
Oct 09 09:54:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:40.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:41 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct 09 09:54:41 compute-1 podman[164729]: 2025-10-09 09:54:41.534293165 +0000 UTC m=+0.040872609 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 09:54:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:41 compute-1 ceph-mon[9795]: pgmap v639: 337 pgs: 337 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 MiB/s wr, 42 op/s
Oct 09 09:54:41 compute-1 ceph-mon[9795]: osdmap e138: 3 total, 3 up, 3 in
Oct 09 09:54:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:42.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:43 compute-1 ceph-mon[9795]: pgmap v641: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.5 MiB/s wr, 52 op/s
Oct 09 09:54:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:44.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:45.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:45 compute-1 ceph-mon[9795]: pgmap v642: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.4 MiB/s wr, 14 op/s
Oct 09 09:54:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:46.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:47 compute-1 nova_compute[162974]: 2025-10-09 09:54:47.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:47 compute-1 nova_compute[162974]: 2025-10-09 09:54:47.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:47.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:47 compute-1 ceph-mon[9795]: pgmap v643: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Oct 09 09:54:48 compute-1 nova_compute[162974]: 2025-10-09 09:54:48.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4012396778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:54:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:54:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882083905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:54:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3874590807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-1 podman[164770]: 2025-10-09 09:54:49.53225795 +0000 UTC m=+0.042135671 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.534 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.727 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.728 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5430MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": 
"0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.728 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.729 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:54:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:49.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.803 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.804 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.828 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:54:49 compute-1 ceph-mon[9795]: pgmap v644: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Oct 09 09:54:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1882083905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3874590807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3185983185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.167 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.170 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.185 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.186 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.187 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:54:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:50.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4109511036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1194021554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.186 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.187 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.187 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.199 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.199 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:51 compute-1 sudo[164810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:54:51 compute-1 sudo[164810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:51 compute-1 sudo[164810]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:51 compute-1 podman[164834]: 2025-10-09 09:54:51.392502629 +0000 UTC m=+0.069863076 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible)
Oct 09 09:54:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:54:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Cumulative writes: 9273 writes, 35K keys, 9273 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                          Cumulative WAL: 9273 writes, 2281 syncs, 4.07 writes per sync, written: 0.02 GB, 0.02 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 860 writes, 1592 keys, 860 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s
                                          Interval WAL: 860 writes, 406 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
                                          
                                          ** Compaction Stats [m-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-0] **
                                          
                                          ** Compaction Stats [m-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-1] **
                                          
                                          ** Compaction Stats [m-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-2] **
                                          
                                          ** Compaction Stats [p-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-0] **
                                          
                                          ** Compaction Stats [p-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-1] **
                                          
                                          ** Compaction Stats [p-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-2] **
                                          
                                          ** Compaction Stats [O-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-0] **
                                          
                                          ** Compaction Stats [O-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-1] **
                                          
                                          ** Compaction Stats [O-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-2] **
                                          
                                          ** Compaction Stats [L] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [L] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [L] **
                                          
                                          ** Compaction Stats [P] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [P] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [P] **
Oct 09 09:54:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:51.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:51 compute-1 ceph-mon[9795]: pgmap v645: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Oct 09 09:54:52 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:52.051 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:54:52 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:52.052 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:54:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:52.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:53.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:53 compute-1 ceph-mon[9795]: pgmap v646: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:54:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:54.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:55.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:55 compute-1 sudo[164855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:54:55 compute-1 sudo[164855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:55 compute-1 sudo[164855]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:55 compute-1 sudo[164880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:54:55 compute-1 sudo[164880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:55 compute-1 ceph-mon[9795]: pgmap v647: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:56 compute-1 sudo[164880]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:56.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:57.054 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:54:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:57.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:58 compute-1 ceph-mon[9795]: pgmap v648: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:58 compute-1 ceph-mon[9795]: pgmap v649: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:54:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:58.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:54:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:59.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:00 compute-1 ceph-mon[9795]: pgmap v650: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:55:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:00.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:00 compute-1 sudo[164936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:55:00 compute-1 sudo[164936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:55:00 compute-1 sudo[164936]: pam_unix(sudo:session): session closed for user root
Oct 09 09:55:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:55:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:55:01 compute-1 ceph-mon[9795]: pgmap v651: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:55:01 compute-1 podman[164962]: 2025-10-09 09:55:01.573314443 +0000 UTC m=+0.083667140 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 09 09:55:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:02.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:03.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:03 compute-1 ceph-mon[9795]: pgmap v652: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:55:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:04.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:55:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:05.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:05 compute-1 ceph-mon[9795]: pgmap v653: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:55:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:06.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:07.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:07 compute-1 ceph-mon[9795]: pgmap v654: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:55:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:08.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:09.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:09 compute-1 ceph-mon[9795]: pgmap v655: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:55:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:10.033 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:10.033 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:10.033 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:11 compute-1 sudo[164989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:55:11 compute-1 sudo[164989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:55:11 compute-1 sudo[164989]: pam_unix(sudo:session): session closed for user root
Oct 09 09:55:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:11.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:11 compute-1 ceph-mon[9795]: pgmap v656: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:55:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3577999995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:55:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3577999995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:55:12 compute-1 podman[165015]: 2025-10-09 09:55:12.525228165 +0000 UTC m=+0.036430240 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 09 09:55:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:12.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1002258365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:13.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:13 compute-1 ceph-mon[9795]: pgmap v657: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:55:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:14.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct 09 09:55:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:55:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Cumulative writes: 3895 writes, 21K keys, 3895 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s
                                          Cumulative WAL: 3895 writes, 3895 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 1462 writes, 7096 keys, 1462 commit groups, 1.0 writes per commit group, ingest: 16.83 MB, 0.03 MB/s
                                          Interval WAL: 1462 writes, 1462 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    449.0      0.07              0.05        10    0.007       0      0       0.0       0.0
                                            L6      1/0   12.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    438.6    372.1      0.31              0.17         9    0.034     42K   4797       0.0       0.0
                                           Sum      1/0   12.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    355.2    386.8      0.38              0.22        19    0.020     42K   4797       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5    344.8    350.5      0.18              0.10         8    0.022     22K   2557       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    438.6    372.1      0.31              0.17         9    0.034     42K   4797       0.0       0.0
                                          High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    453.8      0.07              0.05         9    0.008       0      0       0.0       0.0
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.032, interval 0.011
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.4 seconds
                                          Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.2 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x55e4b55c29b0#2 capacity: 304.00 MB usage: 8.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.9e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(485,7.81 MB,2.57052%) FilterBlock(19,127.80 KB,0.0410532%) IndexBlock(19,240.41 KB,0.0772275%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
Oct 09 09:55:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:15.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct 09 09:55:15 compute-1 ceph-mon[9795]: pgmap v658: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:55:15 compute-1 ceph-mon[9795]: osdmap e139: 3 total, 3 up, 3 in
Oct 09 09:55:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:16.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:16 compute-1 ceph-mon[9795]: osdmap e140: 3 total, 3 up, 3 in
Oct 09 09:55:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:17.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:17 compute-1 ceph-mon[9795]: pgmap v661: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 11 op/s
Oct 09 09:55:17 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1547011463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:18.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:18 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/980579080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:19.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:19 compute-1 ceph-mon[9795]: pgmap v662: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Oct 09 09:55:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:55:20 compute-1 podman[165036]: 2025-10-09 09:55:20.529249983 +0000 UTC m=+0.038949712 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 09:55:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:20.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct 09 09:55:21 compute-1 podman[165054]: 2025-10-09 09:55:21.525969183 +0000 UTC m=+0.037218658 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct 09 09:55:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:21.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:21 compute-1 ceph-mon[9795]: pgmap v663: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 64 op/s
Oct 09 09:55:21 compute-1 ceph-mon[9795]: osdmap e141: 3 total, 3 up, 3 in
Oct 09 09:55:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:22.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:23.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:23 compute-1 ceph-mon[9795]: pgmap v665: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 65 op/s
Oct 09 09:55:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:24.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:25.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:25 compute-1 ceph-mon[9795]: pgmap v666: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 47 op/s
Oct 09 09:55:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:26.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:27 compute-1 ceph-mon[9795]: pgmap v667: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 09 09:55:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:28.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:29.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:30 compute-1 ceph-mon[9795]: pgmap v668: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 09 09:55:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:30.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:31 compute-1 sudo[165076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:55:31 compute-1 sudo[165076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:55:31 compute-1 sudo[165076]: pam_unix(sudo:session): session closed for user root
Oct 09 09:55:31 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 09 09:55:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:31.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:32 compute-1 ceph-mon[9795]: pgmap v669: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 80 op/s
Oct 09 09:55:32 compute-1 podman[165102]: 2025-10-09 09:55:32.548265166 +0000 UTC m=+0.060499997 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 09 09:55:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:32.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:34 compute-1 ceph-mon[9795]: pgmap v670: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 69 op/s
Oct 09 09:55:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:34.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:55:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:35.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:36 compute-1 ceph-mon[9795]: pgmap v671: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 09 09:55:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:36.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:37.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:38 compute-1 ceph-mon[9795]: pgmap v672: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 09 09:55:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:38.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:39.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:40 compute-1 ceph-mon[9795]: pgmap v673: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:55:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:40.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:42 compute-1 ceph-mon[9795]: pgmap v674: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 09:55:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:42.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:43 compute-1 podman[165131]: 2025-10-09 09:55:43.52441749 +0000 UTC m=+0.036137318 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 09 09:55:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:44 compute-1 ceph-mon[9795]: pgmap v675: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 09:55:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:44.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:46 compute-1 ceph-mon[9795]: pgmap v676: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 09 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.128 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 09 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.129 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.129 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 09 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.136 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:46.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:47.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:48 compute-1 ceph-mon[9795]: pgmap v677: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 09:55:48 compute-1 nova_compute[162974]: 2025-10-09 09:55:48.138 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:48.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.035 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.036 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.047 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.131 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.131 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.141 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.142 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.146 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.147 2 INFO nova.compute.claims [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Claim successful on node compute-1.ctlplane.example.com
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.206 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Refreshing inventories for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.258 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating ProviderTree inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.258 2 DEBUG nova.compute.provider_tree [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.272 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Refreshing aggregate associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.289 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Refreshing trait associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, traits: HW_CPU_X86_AESNI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX512VAES,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.310 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:55:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2405746203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.468 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.654 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.655 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5370MB free_disk=59.94271469116211GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": 
"0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.655 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:55:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/292366459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.689 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.693 2 DEBUG nova.compute.provider_tree [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.704 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.715 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.716 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.718 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.758 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.759 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.793 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.809 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance 27831bd3-a756-4807-b9da-7be12d549265 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.809 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.809 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.812 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 09 09:55:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:49.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.849 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.878 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.879 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.880 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Creating image(s)
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.899 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.916 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.935 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.937 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.937 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:50 compute-1 ceph-mon[9795]: pgmap v678: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 09 09:55:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2405746203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1559129444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/292366459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.192 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.196 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.212 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.224 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.224 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.259 2 DEBUG nova.virt.libvirt.imagebackend [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image locations are: [{'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 09 09:55:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:55:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:50.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:55:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.766 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.812 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.813 2 DEBUG nova.virt.images [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] 9546778e-959c-466e-9bef-81ace5bd1cc5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.814 2 DEBUG nova.privsep.utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.814 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.869 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.872 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.916 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.917 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.933 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.935 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 27831bd3-a756-4807-b9da-7be12d549265_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.956 2 WARNING oslo_policy.policy [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.956 2 WARNING oslo_policy.policy [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 09 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.958 2 DEBUG nova.policy [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 09:55:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2594520052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/576738267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3508635897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.100 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 27831bd3-a756-4807-b9da-7be12d549265_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.140 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.193 2 DEBUG nova.objects.instance [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 27831bd3-a756-4807-b9da-7be12d549265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.203 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.203 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Ensure instance console log exists: /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.204 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.204 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.204 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.225 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.225 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.225 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.233 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.233 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.234 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.234 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.234 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.234 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.235 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:55:51 compute-1 sudo[165395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:55:51 compute-1 sudo[165395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:55:51 compute-1 sudo[165395]: pam_unix(sudo:session): session closed for user root
Oct 09 09:55:51 compute-1 podman[165394]: 2025-10-09 09:55:51.532122718 +0000 UTC m=+0.039652588 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 09:55:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:55:51 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208036889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:51 compute-1 podman[165435]: 2025-10-09 09:55:51.591262679 +0000 UTC m=+0.040856940 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 09:55:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:51.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:52 compute-1 nova_compute[162974]: 2025-10-09 09:55:52.054 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Successfully created port: 89605073-2c16-4e83-a34b-96c0ad203677 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 09 09:55:52 compute-1 ceph-mon[9795]: pgmap v679: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 9 op/s
Oct 09 09:55:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3208036889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:52 compute-1 nova_compute[162974]: 2025-10-09 09:55:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:52.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:53 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:53.198 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:55:53 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:53.199 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.432 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Successfully updated port: 89605073-2c16-4e83-a34b-96c0ad203677 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.443 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.444 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.444 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.502 2 DEBUG nova.compute.manager [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-changed-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.502 2 DEBUG nova.compute.manager [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Refreshing instance network info cache due to event network-changed-89605073-2c16-4e83-a34b-96c0ad203677. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.502 2 DEBUG oslo_concurrency.lockutils [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.565 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 09:55:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:53.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.063 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Updating instance_info_cache with network_info: [{"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.075 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.075 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance network_info: |[{"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.076 2 DEBUG oslo_concurrency.lockutils [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.076 2 DEBUG nova.network.neutron [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Refreshing network info cache for port 89605073-2c16-4e83-a34b-96c0ad203677 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.078 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start _get_guest_xml network_info=[{"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.081 2 WARNING nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.087 2 DEBUG nova.virt.libvirt.host [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.087 2 DEBUG nova.virt.libvirt.host [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.089 2 DEBUG nova.virt.libvirt.host [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.090 2 DEBUG nova.virt.libvirt.host [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.090 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.090 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.091 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.091 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.091 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.091 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.096 2 DEBUG nova.privsep.utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.096 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:54 compute-1 ceph-mon[9795]: pgmap v680: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 8 op/s
Oct 09 09:55:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:55:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3156609996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.437 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.454 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.456 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:54.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:55:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2397916184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.794 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.796 2 DEBUG nova.virt.libvirt.vif [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452258331',display_name='tempest-TestNetworkBasicOps-server-452258331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452258331',id=2,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBODLXk0rzffmHcNbUDYGLfUDc9LvP6gD0Cl2kTpN0VYCCLdQjTmH7i6AAWYqub8jT4Jlgu+DRbDcF0CjszX7mILwKGtZArFBrJ9e1Ud75exDORK7fEHNnUEihiwx6WpTPg==',key_name='tempest-TestNetworkBasicOps-68447822',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-g89vp4u8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:55:49Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=27831bd3-a756-4807-b9da-7be12d549265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.796 2 DEBUG nova.network.os_vif_util [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.797 2 DEBUG nova.network.os_vif_util [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.799 2 DEBUG nova.objects.instance [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27831bd3-a756-4807-b9da-7be12d549265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.809 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] End _get_guest_xml xml=<domain type="kvm">
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <uuid>27831bd3-a756-4807-b9da-7be12d549265</uuid>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <name>instance-00000002</name>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <memory>131072</memory>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <vcpu>1</vcpu>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <metadata>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <nova:name>tempest-TestNetworkBasicOps-server-452258331</nova:name>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <nova:creationTime>2025-10-09 09:55:54</nova:creationTime>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <nova:flavor name="m1.nano">
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <nova:memory>128</nova:memory>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <nova:disk>1</nova:disk>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <nova:swap>0</nova:swap>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <nova:vcpus>1</nova:vcpus>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       </nova:flavor>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <nova:owner>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       </nova:owner>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <nova:ports>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <nova:port uuid="89605073-2c16-4e83-a34b-96c0ad203677">
Oct 09 09:55:54 compute-1 nova_compute[162974]:           <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         </nova:port>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       </nova:ports>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     </nova:instance>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   </metadata>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <sysinfo type="smbios">
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <system>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <entry name="manufacturer">RDO</entry>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <entry name="product">OpenStack Compute</entry>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <entry name="serial">27831bd3-a756-4807-b9da-7be12d549265</entry>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <entry name="uuid">27831bd3-a756-4807-b9da-7be12d549265</entry>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <entry name="family">Virtual Machine</entry>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     </system>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   </sysinfo>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <os>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <boot dev="hd"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <smbios mode="sysinfo"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   </os>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <features>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <acpi/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <apic/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <vmcoreinfo/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   </features>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <clock offset="utc">
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <timer name="hpet" present="no"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   </clock>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <cpu mode="host-model" match="exact">
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <disk type="network" device="disk">
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/27831bd3-a756-4807-b9da-7be12d549265_disk">
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       </source>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <target dev="vda" bus="virtio"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <disk type="network" device="cdrom">
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/27831bd3-a756-4807-b9da-7be12d549265_disk.config">
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       </source>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 09:55:54 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <target dev="sda" bus="sata"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <interface type="ethernet">
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <mac address="fa:16:3e:d8:82:c8"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <mtu size="1442"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <target dev="tap89605073-2c"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <serial type="pty">
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <log file="/var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/console.log" append="off"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     </serial>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <video>
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     </video>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <input type="tablet" bus="usb"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <rng model="virtio">
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <backend model="random">/dev/urandom</backend>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <controller type="usb" index="0"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     <memballoon model="virtio">
Oct 09 09:55:54 compute-1 nova_compute[162974]:       <stats period="10"/>
Oct 09 09:55:54 compute-1 nova_compute[162974]:     </memballoon>
Oct 09 09:55:54 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:55:54 compute-1 nova_compute[162974]: </domain>
Oct 09 09:55:54 compute-1 nova_compute[162974]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.810 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Preparing to wait for external event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.810 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.810 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.810 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.811 2 DEBUG nova.virt.libvirt.vif [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452258331',display_name='tempest-TestNetworkBasicOps-server-452258331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452258331',id=2,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBODLXk0rzffmHcNbUDYGLfUDc9LvP6gD0Cl2kTpN0VYCCLdQjTmH7i6AAWYqub8jT4Jlgu+DRbDcF0CjszX7mILwKGtZArFBrJ9e1Ud75exDORK7fEHNnUEihiwx6WpTPg==',key_name='tempest-TestNetworkBasicOps-68447822',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-g89vp4u8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:55:49Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=27831bd3-a756-4807-b9da-7be12d549265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.811 2 DEBUG nova.network.os_vif_util [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.811 2 DEBUG nova.network.os_vif_util [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.812 2 DEBUG os_vif [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.842 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.843 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.843 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLOUT] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.857 2 INFO oslo.privsep.daemon [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp8ct_gpgp/privsep.sock']
Oct 09 09:55:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3156609996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2397916184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:55 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:55.201 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.387 2 INFO oslo.privsep.daemon [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.309 564 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.313 564 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.314 564 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.315 564 INFO oslo.privsep.daemon [-] privsep daemon running as pid 564
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89605073-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89605073-2c, col_values=(('external_ids', {'iface-id': '89605073-2c16-4e83-a34b-96c0ad203677', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:82:c8', 'vm-uuid': '27831bd3-a756-4807-b9da-7be12d549265'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:55 compute-1 NetworkManager[982]: <info>  [1760003755.6413] manager: (tap89605073-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.646 2 INFO os_vif [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c')
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.676 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.677 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.677 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:d8:82:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.677 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Using config drive
Oct 09 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.696 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:55:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:55.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:56 compute-1 ceph-mon[9795]: pgmap v681: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 8 op/s
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.166 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Creating config drive at /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.170 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqwfl_n4v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.294 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqwfl_n4v" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.320 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.324 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config 27831bd3-a756-4807-b9da-7be12d549265_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.417 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config 27831bd3-a756-4807-b9da-7be12d549265_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.418 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Deleting local config drive /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config because it was imported into RBD.
Oct 09 09:55:56 compute-1 systemd[1]: Starting libvirt secret daemon...
Oct 09 09:55:56 compute-1 systemd[1]: Started libvirt secret daemon.
Oct 09 09:55:56 compute-1 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 09 09:55:56 compute-1 NetworkManager[982]: <info>  [1760003756.4946] manager: (tap89605073-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 09 09:55:56 compute-1 kernel: tap89605073-2c: entered promiscuous mode
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:56 compute-1 ovn_controller[62080]: 2025-10-09T09:55:56Z|00027|binding|INFO|Claiming lport 89605073-2c16-4e83-a34b-96c0ad203677 for this chassis.
Oct 09 09:55:56 compute-1 ovn_controller[62080]: 2025-10-09T09:55:56Z|00028|binding|INFO|89605073-2c16-4e83-a34b-96c0ad203677: Claiming fa:16:3e:d8:82:c8 10.100.0.29
Oct 09 09:55:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.504 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:82:c8 10.100.0.29'], port_security=['fa:16:3e:d8:82:c8 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '27831bd3-a756-4807-b9da-7be12d549265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff7a1970-9c22-4d50-af6e-95dd0d807999', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2077437-af43-496a-b32b-28fd39fcc898, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=89605073-2c16-4e83-a34b-96c0ad203677) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:55:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.505 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 89605073-2c16-4e83-a34b-96c0ad203677 in datapath ca25ffbc-c518-421a-acbc-33327ba74e5f bound to our chassis
Oct 09 09:55:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.506 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca25ffbc-c518-421a-acbc-33327ba74e5f
Oct 09 09:55:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.507 71059 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpyq1hdpfk/privsep.sock']
Oct 09 09:55:56 compute-1 systemd-udevd[165617]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:55:56 compute-1 systemd-machined[120683]: New machine qemu-1-instance-00000002.
Oct 09 09:55:56 compute-1 NetworkManager[982]: <info>  [1760003756.5446] device (tap89605073-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:55:56 compute-1 NetworkManager[982]: <info>  [1760003756.5455] device (tap89605073-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 09:55:56 compute-1 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:56 compute-1 ovn_controller[62080]: 2025-10-09T09:55:56Z|00029|binding|INFO|Setting lport 89605073-2c16-4e83-a34b-96c0ad203677 ovn-installed in OVS
Oct 09 09:55:56 compute-1 ovn_controller[62080]: 2025-10-09T09:55:56Z|00030|binding|INFO|Setting lport 89605073-2c16-4e83-a34b-96c0ad203677 up in Southbound
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:55:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:56.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.970 2 DEBUG nova.network.neutron [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Updated VIF entry in instance network info cache for port 89605073-2c16-4e83-a34b-96c0ad203677. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.971 2 DEBUG nova.network.neutron [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Updating instance_info_cache with network_info: [{"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.984 2 DEBUG oslo_concurrency.lockutils [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.047 71059 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.048 71059 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpyq1hdpfk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.970 165637 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.973 165637 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.975 165637 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.975 165637 INFO oslo.privsep.daemon [-] privsep daemon running as pid 165637
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.051 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[b608eb11-924c-4f67-9cd4-ca63053b0b6b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.084 2 DEBUG nova.compute.manager [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.085 2 DEBUG oslo_concurrency.lockutils [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.085 2 DEBUG oslo_concurrency.lockutils [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.085 2 DEBUG oslo_concurrency.lockutils [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.086 2 DEBUG nova.compute.manager [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Processing event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.512 165637 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.513 165637 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.513 165637 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.734 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003757.7333243, 27831bd3-a756-4807-b9da-7be12d549265 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.734 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] VM Started (Lifecycle Event)
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.736 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.746 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.750 2 INFO nova.virt.libvirt.driver [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance spawned successfully.
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.750 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.764 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.769 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.775 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.775 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.776 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.776 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.777 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.777 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.783 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.784 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003757.733423, 27831bd3-a756-4807-b9da-7be12d549265 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.784 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] VM Paused (Lifecycle Event)
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.801 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.804 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003757.7395015, 27831bd3-a756-4807-b9da-7be12d549265 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.804 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] VM Resumed (Lifecycle Event)
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.819 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.821 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:55:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.827 2 INFO nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Took 7.95 seconds to spawn the instance on the hypervisor.
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.828 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.835 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.870 2 INFO nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Took 8.79 seconds to build instance.
Oct 09 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.878 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.117 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6038448c-c0dc-4d63-81e7-0641a4c2c7f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.118 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca25ffbc-c1 in ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 09:55:58 compute-1 ceph-mon[9795]: pgmap v682: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.122 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca25ffbc-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.122 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[59aa6bab-e59b-4dc2-a612-2a9b6045ff45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.126 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f7771151-1424-49c4-8b84-02639c631507]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.154 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[72a0a63c-3643-421d-951a-325e0f7656bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.178 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[be0e10f7-6963-49f9-a487-fe7588dcab2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.181 71059 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpasadcvnl/privsep.sock']
Oct 09 09:55:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:58.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.823 71059 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.825 71059 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpasadcvnl/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.720 165694 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.727 165694 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.730 165694 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.731 165694 INFO oslo.privsep.daemon [-] privsep daemon running as pid 165694
Oct 09 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.830 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc2dd00-b999-41b8-ae57-68101534b43f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.149 2 DEBUG nova.compute.manager [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.150 2 DEBUG oslo_concurrency.lockutils [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.150 2 DEBUG oslo_concurrency.lockutils [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.150 2 DEBUG oslo_concurrency.lockutils [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.150 2 DEBUG nova.compute.manager [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] No waiting events found dispatching network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.151 2 WARNING nova.compute.manager [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received unexpected event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 for instance with vm_state active and task_state None.
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.323 165694 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.323 165694 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.323 165694 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:55:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:59.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.861 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[dd794a77-095a-41c6-b5a3-5cf07d4eb739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:59 compute-1 NetworkManager[982]: <info>  [1760003759.8673] manager: (tapca25ffbc-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.869 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[b945c2a8-02a9-4cc0-a685-3b227c6c0c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:59 compute-1 systemd-udevd[165705]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.908 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[74fae352-bf2e-4096-bd1e-65fabc639c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.910 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb0d761-0805-469b-885c-1afb9d2212bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:59 compute-1 NetworkManager[982]: <info>  [1760003759.9294] device (tapca25ffbc-c0): carrier: link connected
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.934 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[8357530c-2606-4d49-8011-7173822c86cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.948 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[379439f4-fc3f-4fb6-bd54-094a26831853]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca25ffbc-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:52:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 144349, 'reachable_time': 41774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 165716, 'error': None, 'target': 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.961 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[dcaac7a2-e2b9-4a73-a619-6142a316c0d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:52f5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 144349, 'tstamp': 144349}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 165718, 'error': None, 'target': 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.975 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[26097517-2e32-458b-929a-8b09bde036b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca25ffbc-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:52:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 144349, 'reachable_time': 41774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 165719, 'error': None, 'target': 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.002 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a7634eef-8631-4250-93c1-094893461b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.055 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa4f935-1e2b-48e6-ab6a-acf9a9acc21d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.059 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca25ffbc-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.059 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.060 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca25ffbc-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:00 compute-1 NetworkManager[982]: <info>  [1760003760.0641] manager: (tapca25ffbc-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 09 09:56:00 compute-1 kernel: tapca25ffbc-c0: entered promiscuous mode
Oct 09 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.070 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca25ffbc-c0, col_values=(('external_ids', {'iface-id': 'b963e480-a7bb-4169-89b3-7559ce9e7e8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:00 compute-1 ovn_controller[62080]: 2025-10-09T09:56:00Z|00031|binding|INFO|Releasing lport b963e480-a7bb-4169-89b3-7559ce9e7e8a from this chassis (sb_readonly=0)
Oct 09 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.075 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca25ffbc-c518-421a-acbc-33327ba74e5f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca25ffbc-c518-421a-acbc-33327ba74e5f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.075 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[b783ce31-0228-4f6b-9ff2-b4415a51539e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.077 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: global
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     log         /dev/log local0 debug
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     log-tag     haproxy-metadata-proxy-ca25ffbc-c518-421a-acbc-33327ba74e5f
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     user        root
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     group       root
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     maxconn     1024
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     pidfile     /var/lib/neutron/external/pids/ca25ffbc-c518-421a-acbc-33327ba74e5f.pid.haproxy
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     daemon
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: defaults
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     log global
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     mode http
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     option httplog
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     option dontlognull
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     option http-server-close
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     option forwardfor
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     retries                 3
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     timeout http-request    30s
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     timeout connect         30s
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     timeout client          32s
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     timeout server          32s
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     timeout http-keep-alive 30s
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: listen listener
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     bind 169.254.169.254:80
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:     http-request add-header X-OVN-Network-ID ca25ffbc-c518-421a-acbc-33327ba74e5f
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.079 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'env', 'PROCESS_TAG=haproxy-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca25ffbc-c518-421a-acbc-33327ba74e5f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:00 compute-1 ceph-mon[9795]: pgmap v683: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 09 09:56:00 compute-1 podman[165748]: 2025-10-09 09:56:00.390184935 +0000 UTC m=+0.046313239 container create 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:56:00 compute-1 systemd[1]: Started libpod-conmon-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6.scope.
Oct 09 09:56:00 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:56:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad978998028dec93cc819757167a683773bfe6787359ca9c4afc87724a5a521/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 09:56:00 compute-1 podman[165748]: 2025-10-09 09:56:00.444577499 +0000 UTC m=+0.100705792 container init 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 09 09:56:00 compute-1 podman[165748]: 2025-10-09 09:56:00.448962698 +0000 UTC m=+0.105090993 container start 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:56:00 compute-1 podman[165748]: 2025-10-09 09:56:00.372272175 +0000 UTC m=+0.028400490 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:56:00 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [NOTICE]   (165765) : New worker (165767) forked
Oct 09 09:56:00 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [NOTICE]   (165765) : Loading success.
Oct 09 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:00 compute-1 sudo[165772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:56:00 compute-1 sudo[165772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:00 compute-1 sudo[165772]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:00 compute-1 sudo[165797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 09 09:56:00 compute-1 sudo[165797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:01 compute-1 sudo[165797]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:01 compute-1 sudo[165840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:56:01 compute-1 sudo[165840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:01 compute-1 sudo[165840]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:01 compute-1 sudo[165865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:56:01 compute-1 sudo[165865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:01 compute-1 sudo[165865]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:01.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:02 compute-1 ceph-mon[9795]: pgmap v684: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:56:02 compute-1 nova_compute[162974]: 2025-10-09 09:56:02.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:02.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:03 compute-1 ceph-mon[9795]: pgmap v685: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Oct 09 09:56:03 compute-1 podman[165921]: 2025-10-09 09:56:03.553123908 +0000 UTC m=+0.062909866 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 09 09:56:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:03.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:04.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:05 compute-1 ceph-mon[9795]: pgmap v686: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Oct 09 09:56:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:56:05 compute-1 nova_compute[162974]: 2025-10-09 09:56:05.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1636] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct 09 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1642] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1651] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct 09 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1654] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1661] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 09 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1677] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 09 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1682] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1686] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:56:05 compute-1 nova_compute[162974]: 2025-10-09 09:56:05.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:05 compute-1 ovn_controller[62080]: 2025-10-09T09:56:05Z|00032|binding|INFO|Releasing lport b963e480-a7bb-4169-89b3-7559ce9e7e8a from this chassis (sb_readonly=0)
Oct 09 09:56:05 compute-1 nova_compute[162974]: 2025-10-09 09:56:05.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:05 compute-1 sudo[165946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:56:05 compute-1 sudo[165946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:05 compute-1 sudo[165946]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:05 compute-1 nova_compute[162974]: 2025-10-09 09:56:05.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:05.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:06 compute-1 ceph-mon[9795]: pgmap v687: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Oct 09 09:56:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:06.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:07 compute-1 nova_compute[162974]: 2025-10-09 09:56:07.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:07.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:08 compute-1 ceph-mon[9795]: pgmap v688: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 83 op/s
Oct 09 09:56:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:08.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:09 compute-1 ovn_controller[62080]: 2025-10-09T09:56:09Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:82:c8 10.100.0.29
Oct 09 09:56:09 compute-1 ovn_controller[62080]: 2025-10-09T09:56:09Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:82:c8 10.100.0.29
Oct 09 09:56:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:09.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:10.034 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:10.035 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:10.035 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:10 compute-1 nova_compute[162974]: 2025-10-09 09:56:10.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:56:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:56:10 compute-1 ceph-mon[9795]: pgmap v689: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 83 op/s
Oct 09 09:56:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:11 compute-1 sudo[165976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:56:11 compute-1 sudo[165976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:11 compute-1 sudo[165976]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:11.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:12 compute-1 nova_compute[162974]: 2025-10-09 09:56:12.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:12.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:12 compute-1 ceph-mon[9795]: pgmap v690: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.4 MiB/s wr, 67 op/s
Oct 09 09:56:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/335239506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:56:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/335239506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:56:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:13.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:14 compute-1 podman[166002]: 2025-10-09 09:56:14.530249008 +0000 UTC m=+0.038624750 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 09:56:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:14 compute-1 ceph-mon[9795]: pgmap v691: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 09 09:56:15 compute-1 nova_compute[162974]: 2025-10-09 09:56:15.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:15.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:16.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:16 compute-1 ceph-mon[9795]: pgmap v692: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 184 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:56:17 compute-1 nova_compute[162974]: 2025-10-09 09:56:17.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:56:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:17.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.223 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.224 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.224 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.224 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.224 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.225 2 INFO nova.compute.manager [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Terminating instance
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.231 2 DEBUG nova.compute.manager [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 09 09:56:18 compute-1 kernel: tap89605073-2c (unregistering): left promiscuous mode
Oct 09 09:56:18 compute-1 NetworkManager[982]: <info>  [1760003778.2726] device (tap89605073-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:56:18 compute-1 ovn_controller[62080]: 2025-10-09T09:56:18Z|00033|binding|INFO|Releasing lport 89605073-2c16-4e83-a34b-96c0ad203677 from this chassis (sb_readonly=0)
Oct 09 09:56:18 compute-1 ovn_controller[62080]: 2025-10-09T09:56:18Z|00034|binding|INFO|Setting lport 89605073-2c16-4e83-a34b-96c0ad203677 down in Southbound
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:18 compute-1 ovn_controller[62080]: 2025-10-09T09:56:18Z|00035|binding|INFO|Removing iface tap89605073-2c ovn-installed in OVS
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.287 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:82:c8 10.100.0.29'], port_security=['fa:16:3e:d8:82:c8 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '27831bd3-a756-4807-b9da-7be12d549265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff7a1970-9c22-4d50-af6e-95dd0d807999', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2077437-af43-496a-b32b-28fd39fcc898, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=89605073-2c16-4e83-a34b-96c0ad203677) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.289 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 89605073-2c16-4e83-a34b-96c0ad203677 in datapath ca25ffbc-c518-421a-acbc-33327ba74e5f unbound from our chassis
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.290 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca25ffbc-c518-421a-acbc-33327ba74e5f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.291 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[30e07514-f643-495b-a2e4-af36b2bef7d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.292 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f namespace which is not needed anymore
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:18 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 09 09:56:18 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 12.014s CPU time.
Oct 09 09:56:18 compute-1 systemd-machined[120683]: Machine qemu-1-instance-00000002 terminated.
Oct 09 09:56:18 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [NOTICE]   (165765) : haproxy version is 2.8.14-c23fe91
Oct 09 09:56:18 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [NOTICE]   (165765) : path to executable is /usr/sbin/haproxy
Oct 09 09:56:18 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [ALERT]    (165765) : Current worker (165767) exited with code 143 (Terminated)
Oct 09 09:56:18 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [WARNING]  (165765) : All workers exited. Exiting... (0)
Oct 09 09:56:18 compute-1 systemd[1]: libpod-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6.scope: Deactivated successfully.
Oct 09 09:56:18 compute-1 conmon[165761]: conmon 926673072c2de5c9c8c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6.scope/container/memory.events
Oct 09 09:56:18 compute-1 podman[166043]: 2025-10-09 09:56:18.395290384 +0000 UTC m=+0.035716796 container died 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:56:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-0ad978998028dec93cc819757167a683773bfe6787359ca9c4afc87724a5a521-merged.mount: Deactivated successfully.
Oct 09 09:56:18 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6-userdata-shm.mount: Deactivated successfully.
Oct 09 09:56:18 compute-1 podman[166043]: 2025-10-09 09:56:18.419191296 +0000 UTC m=+0.059617708 container cleanup 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 09:56:18 compute-1 systemd[1]: libpod-conmon-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6.scope: Deactivated successfully.
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.458 2 INFO nova.virt.libvirt.driver [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance destroyed successfully.
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.459 2 DEBUG nova.objects.instance [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 27831bd3-a756-4807-b9da-7be12d549265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.469 2 DEBUG nova.virt.libvirt.vif [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452258331',display_name='tempest-TestNetworkBasicOps-server-452258331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452258331',id=2,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBODLXk0rzffmHcNbUDYGLfUDc9LvP6gD0Cl2kTpN0VYCCLdQjTmH7i6AAWYqub8jT4Jlgu+DRbDcF0CjszX7mILwKGtZArFBrJ9e1Ud75exDORK7fEHNnUEihiwx6WpTPg==',key_name='tempest-TestNetworkBasicOps-68447822',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:55:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-g89vp4u8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:55:57Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=27831bd3-a756-4807-b9da-7be12d549265,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.470 2 DEBUG nova.network.os_vif_util [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.471 2 DEBUG nova.network.os_vif_util [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.471 2 DEBUG os_vif [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89605073-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:18 compute-1 podman[166075]: 2025-10-09 09:56:18.474194191 +0000 UTC m=+0.039165117 container remove 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.479 2 INFO os_vif [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c')
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.482 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9af83b47-069f-4db0-b60d-518806193a4f]: (4, ('Thu Oct  9 09:56:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f (926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6)\n926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6\nThu Oct  9 09:56:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f (926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6)\n926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.483 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[efb526eb-7744-4a1b-955b-52d12bdfc968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.484 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca25ffbc-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:18 compute-1 kernel: tapca25ffbc-c0: left promiscuous mode
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.503 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf34723-dbe9-40b7-900a-03cb55a78d3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.518 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5e00e52f-d53f-4502-8729-66a69e3f8699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.520 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e0a431-0d37-4d91-b7ad-a93e3aae09ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.532 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[be64e426-651d-4c68-9002-cebd5369989c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 144342, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 166117, 'error': None, 'target': 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:18 compute-1 systemd[1]: run-netns-ovnmeta\x2dca25ffbc\x2dc518\x2d421a\x2dacbc\x2d33327ba74e5f.mount: Deactivated successfully.
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.541 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.541 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[6c573294-4fc3-4605-914d-4ae534e7eb90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.600 2 DEBUG nova.compute.manager [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-unplugged-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.600 2 DEBUG oslo_concurrency.lockutils [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.600 2 DEBUG oslo_concurrency.lockutils [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.601 2 DEBUG oslo_concurrency.lockutils [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.601 2 DEBUG nova.compute.manager [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] No waiting events found dispatching network-vif-unplugged-89605073-2c16-4e83-a34b-96c0ad203677 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.601 2 DEBUG nova.compute.manager [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-unplugged-89605073-2c16-4e83-a34b-96c0ad203677 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.660 2 INFO nova.virt.libvirt.driver [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Deleting instance files /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265_del
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.660 2 INFO nova.virt.libvirt.driver [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Deletion of /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265_del complete
Oct 09 09:56:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:18 compute-1 ceph-mon[9795]: pgmap v693: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.710 2 DEBUG nova.virt.libvirt.host [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.711 2 INFO nova.virt.libvirt.host [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] UEFI support detected
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.712 2 INFO nova.compute.manager [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Took 0.48 seconds to destroy the instance on the hypervisor.
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.713 2 DEBUG oslo.service.loopingcall [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.713 2 DEBUG nova.compute.manager [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 09 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.713 2 DEBUG nova.network.neutron [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.109 2 DEBUG nova.network.neutron [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.220 2 INFO nova.compute.manager [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Took 0.51 seconds to deallocate network for instance.
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.253 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.253 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.288 2 DEBUG oslo_concurrency.processutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:56:19 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/12413591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.629 2 DEBUG oslo_concurrency.processutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.634 2 DEBUG nova.compute.provider_tree [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.670 2 DEBUG nova.scheduler.client.report [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updated inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.670 2 DEBUG nova.compute.provider_tree [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.671 2 DEBUG nova.compute.provider_tree [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.683 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.702 2 INFO nova.scheduler.client.report [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 27831bd3-a756-4807-b9da-7be12d549265
Oct 09 09:56:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:56:19 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/12413591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.747 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.673 2 DEBUG nova.compute.manager [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.674 2 DEBUG oslo_concurrency.lockutils [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.674 2 DEBUG oslo_concurrency.lockutils [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.674 2 DEBUG oslo_concurrency.lockutils [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.674 2 DEBUG nova.compute.manager [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] No waiting events found dispatching network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.675 2 WARNING nova.compute.manager [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received unexpected event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 for instance with vm_state deleted and task_state None.
Oct 09 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.675 2 DEBUG nova.compute.manager [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-deleted-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:56:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:20.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:20 compute-1 ceph-mon[9795]: pgmap v694: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 09 09:56:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:21.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:22 compute-1 podman[166145]: 2025-10-09 09:56:22.533216367 +0000 UTC m=+0.043376791 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 09:56:22 compute-1 podman[166144]: 2025-10-09 09:56:22.554177916 +0000 UTC m=+0.065428757 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 09:56:22 compute-1 nova_compute[162974]: 2025-10-09 09:56:22.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:56:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:56:22 compute-1 ceph-mon[9795]: pgmap v695: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.2 MiB/s wr, 232 op/s
Oct 09 09:56:23 compute-1 nova_compute[162974]: 2025-10-09 09:56:23.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:23.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:24 compute-1 ceph-mon[9795]: pgmap v696: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 24 KiB/s wr, 173 op/s
Oct 09 09:56:25 compute-1 nova_compute[162974]: 2025-10-09 09:56:25.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:25 compute-1 nova_compute[162974]: 2025-10-09 09:56:25.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:25.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:26 compute-1 ceph-mon[9795]: pgmap v697: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 24 KiB/s wr, 173 op/s
Oct 09 09:56:27 compute-1 nova_compute[162974]: 2025-10-09 09:56:27.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:27.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:28 compute-1 nova_compute[162974]: 2025-10-09 09:56:28.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:28 compute-1 ceph-mon[9795]: pgmap v698: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 22 KiB/s wr, 172 op/s
Oct 09 09:56:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2733332451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:30.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:30 compute-1 ceph-mon[9795]: pgmap v699: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 11 KiB/s wr, 172 op/s
Oct 09 09:56:31 compute-1 sudo[166186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:56:31 compute-1 sudo[166186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:31 compute-1 sudo[166186]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:31.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:32 compute-1 nova_compute[162974]: 2025-10-09 09:56:32.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:32.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:32 compute-1 ceph-mon[9795]: pgmap v700: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 12 KiB/s wr, 200 op/s
Oct 09 09:56:33 compute-1 nova_compute[162974]: 2025-10-09 09:56:33.457 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760003778.4563627, 27831bd3-a756-4807-b9da-7be12d549265 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:56:33 compute-1 nova_compute[162974]: 2025-10-09 09:56:33.457 2 INFO nova.compute.manager [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] VM Stopped (Lifecycle Event)
Oct 09 09:56:33 compute-1 nova_compute[162974]: 2025-10-09 09:56:33.472 2 DEBUG nova.compute.manager [None req-b441c2c0-af09-4be8-94a1-a785b8f3abda - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:56:33 compute-1 nova_compute[162974]: 2025-10-09 09:56:33.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:33.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:34 compute-1 podman[166212]: 2025-10-09 09:56:34.568301446 +0000 UTC m=+0.079803405 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 09 09:56:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:34.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:34 compute-1 ceph-mon[9795]: pgmap v701: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 09 09:56:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:56:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:35.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:36.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:36 compute-1 ceph-mon[9795]: pgmap v702: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 09 09:56:37 compute-1 nova_compute[162974]: 2025-10-09 09:56:37.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:37.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:38 compute-1 nova_compute[162974]: 2025-10-09 09:56:38.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:38.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:38 compute-1 ceph-mon[9795]: pgmap v703: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 09 09:56:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:40.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:40 compute-1 ceph-mon[9795]: pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 09 09:56:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:41.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.136 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.137 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.147 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.200 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.200 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.204 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.204 2 INFO nova.compute.claims [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Claim successful on node compute-1.ctlplane.example.com
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.265 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:42 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:56:42 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1320803762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.606 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.610 2 DEBUG nova.compute.provider_tree [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.622 2 DEBUG nova.scheduler.client.report [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.634 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.635 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.664 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.665 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.678 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.690 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 09 09:56:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:42.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.747 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.748 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.748 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Creating image(s)
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.767 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.785 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:56:42 compute-1 ceph-mon[9795]: pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 09 09:56:42 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1320803762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.807 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.809 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.854 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.855 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.856 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.856 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.873 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.875 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.952 2 DEBUG nova.policy [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.016 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.064 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 09 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.122 2 DEBUG nova.objects.instance [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.132 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 09 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.132 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Ensure instance console log exists: /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 09 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.133 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.133 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.133 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:43.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.138 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Successfully created port: 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 09 09:56:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:44.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.728 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Successfully updated port: 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.740 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.741 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.741 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.794 2 DEBUG nova.compute.manager [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.794 2 DEBUG nova.compute.manager [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing instance network info cache due to event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.794 2 DEBUG oslo_concurrency.lockutils [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:56:44 compute-1 ceph-mon[9795]: pgmap v706: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.855 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.346 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.358 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.358 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance network_info: |[{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.358 2 DEBUG oslo_concurrency.lockutils [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.359 2 DEBUG nova.network.neutron [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.360 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start _get_guest_xml network_info=[{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.364 2 WARNING nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.367 2 DEBUG nova.virt.libvirt.host [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.367 2 DEBUG nova.virt.libvirt.host [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.372 2 DEBUG nova.virt.libvirt.host [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.372 2 DEBUG nova.virt.libvirt.host [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.372 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.372 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.374 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.374 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.374 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.376 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:45 compute-1 podman[166449]: 2025-10-09 09:56:45.530274391 +0000 UTC m=+0.044885776 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 09 09:56:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:56:45 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1755900932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.716 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.734 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.737 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:45 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1755900932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:56:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:56:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1438079082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.089 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.091 2 DEBUG nova.virt.libvirt.vif [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:56:42Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.091 2 DEBUG nova.network.os_vif_util [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.092 2 DEBUG nova.network.os_vif_util [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.092 2 DEBUG nova.objects.instance [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.107 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] End _get_guest_xml xml=<domain type="kvm">
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <uuid>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</uuid>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <name>instance-00000003</name>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <memory>131072</memory>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <vcpu>1</vcpu>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <metadata>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <nova:creationTime>2025-10-09 09:56:45</nova:creationTime>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <nova:flavor name="m1.nano">
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <nova:memory>128</nova:memory>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <nova:disk>1</nova:disk>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <nova:swap>0</nova:swap>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <nova:vcpus>1</nova:vcpus>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       </nova:flavor>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <nova:owner>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       </nova:owner>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <nova:ports>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct 09 09:56:46 compute-1 nova_compute[162974]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         </nova:port>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       </nova:ports>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     </nova:instance>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   </metadata>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <sysinfo type="smbios">
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <system>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <entry name="manufacturer">RDO</entry>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <entry name="product">OpenStack Compute</entry>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <entry name="serial">e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <entry name="uuid">e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <entry name="family">Virtual Machine</entry>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     </system>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   </sysinfo>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <os>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <boot dev="hd"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <smbios mode="sysinfo"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   </os>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <features>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <acpi/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <apic/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <vmcoreinfo/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   </features>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <clock offset="utc">
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <timer name="hpet" present="no"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   </clock>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <cpu mode="host-model" match="exact">
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <disk type="network" device="disk">
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk">
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       </source>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <target dev="vda" bus="virtio"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <disk type="network" device="cdrom">
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config">
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       </source>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 09:56:46 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <target dev="sda" bus="sata"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <interface type="ethernet">
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <mac address="fa:16:3e:4d:30:c8"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <mtu size="1442"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <target dev="tap8d2d29b3-65"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <serial type="pty">
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <log file="/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log" append="off"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     </serial>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <video>
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     </video>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <input type="tablet" bus="usb"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <rng model="virtio">
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <backend model="random">/dev/urandom</backend>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <controller type="usb" index="0"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     <memballoon model="virtio">
Oct 09 09:56:46 compute-1 nova_compute[162974]:       <stats period="10"/>
Oct 09 09:56:46 compute-1 nova_compute[162974]:     </memballoon>
Oct 09 09:56:46 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:56:46 compute-1 nova_compute[162974]: </domain>
Oct 09 09:56:46 compute-1 nova_compute[162974]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.108 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Preparing to wait for external event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.108 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.109 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.109 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.109 2 DEBUG nova.virt.libvirt.vif [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:56:42Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.109 2 DEBUG nova.network.os_vif_util [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.110 2 DEBUG nova.network.os_vif_util [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.110 2 DEBUG os_vif [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d2d29b3-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d2d29b3-65, col_values=(('external_ids', {'iface-id': '8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:30:c8', 'vm-uuid': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.1165] manager: (tap8d2d29b3-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.122 2 INFO os_vif [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65')
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.155 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.156 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.156 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:4d:30:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.157 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Using config drive
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.178 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.437 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Creating config drive at /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.442 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo_bzdxk5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.562 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo_bzdxk5" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.588 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.592 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.687 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.688 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Deleting local config drive /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config because it was imported into RBD.
Oct 09 09:56:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.7354] manager: (tap8d2d29b3-65): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 09 09:56:46 compute-1 kernel: tap8d2d29b3-65: entered promiscuous mode
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:46 compute-1 ovn_controller[62080]: 2025-10-09T09:56:46Z|00036|binding|INFO|Claiming lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f for this chassis.
Oct 09 09:56:46 compute-1 ovn_controller[62080]: 2025-10-09T09:56:46Z|00037|binding|INFO|8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f: Claiming fa:16:3e:4d:30:c8 10.100.0.7
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.749 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:30:c8 10.100.0.7'], port_security=['fa:16:3e:4d:30:c8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26c660ed-37e9-4f44-b603-3901342edf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fd66aef-c4b5-4f4c-ae18-6ccc210d224e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edfdfe9-a5ca-4224-9930-4324a48b984f, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.750 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f in datapath 26c660ed-37e9-4f44-b603-3901342edf9b bound to our chassis
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.751 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26c660ed-37e9-4f44-b603-3901342edf9b
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.762 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9a597163-bdc2-47b9-84cc-ad34ebfff3a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.762 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26c660ed-31 in ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.765 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26c660ed-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.765 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3a2903-87b2-4992-b43a-f211fc2e3b04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.766 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[621711f3-4b54-42ec-8f88-b8d4d2a30581]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 systemd-machined[120683]: New machine qemu-2-instance-00000003.
Oct 09 09:56:46 compute-1 systemd-udevd[166582]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:56:46 compute-1 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Oct 09 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.7863] device (tap8d2d29b3-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.7868] device (tap8d2d29b3-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.783 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[7602c751-40ff-4982-8623-d7954a7be632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.809 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c1d771-9c68-4e92-9de9-aa0ec93668d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 ceph-mon[9795]: pgmap v707: 337 pgs: 337 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 336 KiB/s wr, 4 op/s
Oct 09 09:56:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1438079082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:56:46 compute-1 ovn_controller[62080]: 2025-10-09T09:56:46Z|00038|binding|INFO|Setting lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f ovn-installed in OVS
Oct 09 09:56:46 compute-1 ovn_controller[62080]: 2025-10-09T09:56:46Z|00039|binding|INFO|Setting lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f up in Southbound
Oct 09 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.849 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[a112979e-7527-4f78-8d88-514bee31d9a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.8541] manager: (tap26c660ed-30): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct 09 09:56:46 compute-1 systemd-udevd[166585]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.853 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[0b627ffc-9055-4dd7-b212-a89a447ee896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.886 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb7f80d-be8a-4ffe-828d-40ba9afd15b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.888 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[46cdf40f-5898-4915-ae4c-54a6cc5c4fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.9110] device (tap26c660ed-30): carrier: link connected
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.915 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[b7792502-ab2f-4430-bc96-040a99a0a0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.929 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[21724580-5228-4ec9-a478-cd2b1ce5d993]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26c660ed-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 149048, 'reachable_time': 32133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 166606, 'error': None, 'target': 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.953 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[aad70e92-bc5d-434a-a6c8-38d5910e2b86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:1ca4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 149048, 'tstamp': 149048}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 166607, 'error': None, 'target': 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.977 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[62a532a4-811d-452e-aa95-7e20eee588ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26c660ed-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 149048, 'reachable_time': 32133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 166608, 'error': None, 'target': 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.012 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[df144f35-cebd-4459-8e12-f975393754ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.035 2 DEBUG nova.compute.manager [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.035 2 DEBUG oslo_concurrency.lockutils [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.035 2 DEBUG oslo_concurrency.lockutils [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.035 2 DEBUG oslo_concurrency.lockutils [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.036 2 DEBUG nova.compute.manager [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Processing event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.055 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ef2391-7b9a-4a9a-a3a8-19a9df0ebb58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.056 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26c660ed-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.056 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.057 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26c660ed-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:47 compute-1 kernel: tap26c660ed-30: entered promiscuous mode
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.058 2 DEBUG nova.network.neutron [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updated VIF entry in instance network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.060 2 DEBUG nova.network.neutron [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:56:47 compute-1 NetworkManager[982]: <info>  [1760003807.0602] manager: (tap26c660ed-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.062 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26c660ed-30, col_values=(('external_ids', {'iface-id': '57354100-1abc-4399-a76b-c42eaec1ad73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:47 compute-1 ovn_controller[62080]: 2025-10-09T09:56:47Z|00040|binding|INFO|Releasing lport 57354100-1abc-4399-a76b-c42eaec1ad73 from this chassis (sb_readonly=0)
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.065 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26c660ed-37e9-4f44-b603-3901342edf9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26c660ed-37e9-4f44-b603-3901342edf9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.066 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6816ea77-7d60-425d-9cde-290f72da4e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.067 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: global
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     log         /dev/log local0 debug
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     log-tag     haproxy-metadata-proxy-26c660ed-37e9-4f44-b603-3901342edf9b
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     user        root
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     group       root
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     maxconn     1024
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     pidfile     /var/lib/neutron/external/pids/26c660ed-37e9-4f44-b603-3901342edf9b.pid.haproxy
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     daemon
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: defaults
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     log global
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     mode http
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     option httplog
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     option dontlognull
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     option http-server-close
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     option forwardfor
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     retries                 3
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     timeout http-request    30s
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     timeout connect         30s
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     timeout client          32s
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     timeout server          32s
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     timeout http-keep-alive 30s
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: listen listener
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     bind 169.254.169.254:80
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:     http-request add-header X-OVN-Network-ID 26c660ed-37e9-4f44-b603-3901342edf9b
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.069 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'env', 'PROCESS_TAG=haproxy-26c660ed-37e9-4f44-b603-3901342edf9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26c660ed-37e9-4f44-b603-3901342edf9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.072 2 DEBUG oslo_concurrency.lockutils [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:47 compute-1 podman[166679]: 2025-10-09 09:56:47.449828363 +0000 UTC m=+0.036846626 container create e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 09 09:56:47 compute-1 systemd[1]: Started libpod-conmon-e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c.scope.
Oct 09 09:56:47 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:56:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a45d62be4bd9ce9df4641fb90075d2091d46c2e93ad8f4010bfee1112d2e50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.518 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.520 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003807.517912, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.521 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] VM Started (Lifecycle Event)
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.524 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 09 09:56:47 compute-1 podman[166679]: 2025-10-09 09:56:47.526267311 +0000 UTC m=+0.113285584 container init e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.530 2 INFO nova.virt.libvirt.driver [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance spawned successfully.
Oct 09 09:56:47 compute-1 podman[166679]: 2025-10-09 09:56:47.434183329 +0000 UTC m=+0.021201602 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.531 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 09 09:56:47 compute-1 podman[166679]: 2025-10-09 09:56:47.533790568 +0000 UTC m=+0.120808821 container start e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.536 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.543 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.547 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.548 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.548 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.549 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.549 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.549 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:56:47 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [NOTICE]   (166695) : New worker (166697) forked
Oct 09 09:56:47 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [NOTICE]   (166695) : Loading success.
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.556 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.556 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003807.5201333, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.557 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] VM Paused (Lifecycle Event)
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.568 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.570 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003807.523251, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.570 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] VM Resumed (Lifecycle Event)
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.592 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.594 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.611 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.618 2 INFO nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Took 4.87 seconds to spawn the instance on the hypervisor.
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.619 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.657 2 INFO nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Took 5.48 seconds to build instance.
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.669 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:48 compute-1 nova_compute[162974]: 2025-10-09 09:56:48.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:48.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:48 compute-1 ceph-mon[9795]: pgmap v708: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.091 2 DEBUG nova.compute.manager [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.092 2 DEBUG oslo_concurrency.lockutils [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.092 2 DEBUG oslo_concurrency.lockutils [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.092 2 DEBUG oslo_concurrency.lockutils [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.092 2 DEBUG nova.compute.manager [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.093 2 WARNING nova.compute.manager [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f for instance with vm_state active and task_state None.
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.131 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.131 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.131 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.479 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.527 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.527 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.776 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.777 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4955MB free_disk=59.967525482177734GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.778 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.778 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3254798988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.828 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.829 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.829 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:56:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:49.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.885 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:50 compute-1 ovn_controller[62080]: 2025-10-09T09:56:50Z|00041|binding|INFO|Releasing lport 57354100-1abc-4399-a76b-c42eaec1ad73 from this chassis (sb_readonly=0)
Oct 09 09:56:50 compute-1 NetworkManager[982]: <info>  [1760003810.2231] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 09 09:56:50 compute-1 NetworkManager[982]: <info>  [1760003810.2238] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:50 compute-1 ovn_controller[62080]: 2025-10-09T09:56:50Z|00042|binding|INFO|Releasing lport 57354100-1abc-4399-a76b-c42eaec1ad73 from this chassis (sb_readonly=0)
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.266 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.270 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.279 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.292 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.293 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.437 2 DEBUG nova.compute.manager [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.437 2 DEBUG nova.compute.manager [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing instance network info cache due to event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.438 2 DEBUG oslo_concurrency.lockutils [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.438 2 DEBUG oslo_concurrency.lockutils [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.438 2 DEBUG nova.network.neutron [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:56:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:50.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:50 compute-1 ceph-mon[9795]: pgmap v709: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:56:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3169311562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1075486111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.293 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.294 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.294 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.294 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.294 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.295 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.431 2 DEBUG nova.network.neutron [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updated VIF entry in instance network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.432 2 DEBUG nova.network.neutron [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.445 2 DEBUG oslo_concurrency.lockutils [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:56:51 compute-1 sudo[166750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:56:51 compute-1 sudo[166750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:51 compute-1 sudo[166750]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2705111090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1845698345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:56:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:52.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:56:52 compute-1 ceph-mon[9795]: pgmap v710: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:56:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1583333975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.922 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.923 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.923 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 09 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.923 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:56:53 compute-1 podman[166776]: 2025-10-09 09:56:53.531597239 +0000 UTC m=+0.042017267 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:56:53 compute-1 podman[166777]: 2025-10-09 09:56:53.539242327 +0000 UTC m=+0.049662343 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:56:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:54 compute-1 nova_compute[162974]: 2025-10-09 09:56:54.835 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:56:54 compute-1 nova_compute[162974]: 2025-10-09 09:56:54.846 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:56:54 compute-1 nova_compute[162974]: 2025-10-09 09:56:54.847 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 09 09:56:54 compute-1 nova_compute[162974]: 2025-10-09 09:56:54.847 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:54 compute-1 ceph-mon[9795]: pgmap v711: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 09:56:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:56 compute-1 nova_compute[162974]: 2025-10-09 09:56:56.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:56.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:56 compute-1 ceph-mon[9795]: pgmap v712: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:56:57 compute-1 nova_compute[162974]: 2025-10-09 09:56:57.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:56:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:57.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:58.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:58 compute-1 ovn_controller[62080]: 2025-10-09T09:56:58Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:30:c8 10.100.0.7
Oct 09 09:56:58 compute-1 ovn_controller[62080]: 2025-10-09T09:56:58Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:30:c8 10.100.0.7
Oct 09 09:56:58 compute-1 ceph-mon[9795]: pgmap v713: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 99 op/s
Oct 09 09:56:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:56:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:59.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:00 compute-1 ceph-mon[9795]: pgmap v714: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:57:01 compute-1 nova_compute[162974]: 2025-10-09 09:57:01.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:01.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:02 compute-1 nova_compute[162974]: 2025-10-09 09:57:02.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:57:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:02.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:57:02 compute-1 ceph-mon[9795]: pgmap v715: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 09 09:57:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:04 compute-1 nova_compute[162974]: 2025-10-09 09:57:04.738 2 INFO nova.compute.manager [None req-1a299b68-b3a0-42cf-8d1a-4e75df6c02d4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Get console output
Oct 09 09:57:04 compute-1 nova_compute[162974]: 2025-10-09 09:57:04.741 2 INFO oslo.privsep.daemon [None req-1a299b68-b3a0-42cf-8d1a-4e75df6c02d4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpypcx92vx/privsep.sock']
Oct 09 09:57:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:04.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:04 compute-1 ceph-mon[9795]: pgmap v716: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 09:57:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.278 2 INFO oslo.privsep.daemon [None req-1a299b68-b3a0-42cf-8d1a-4e75df6c02d4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap
Oct 09 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.194 1023 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.198 1023 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.199 1023 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 09 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.199 1023 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1023
Oct 09 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.353 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 09 09:57:05 compute-1 podman[166823]: 2025-10-09 09:57:05.54736587 +0000 UTC m=+0.058534425 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 09 09:57:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:05 compute-1 sudo[166846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:57:05 compute-1 sudo[166846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:05 compute-1 sudo[166846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:05 compute-1 sudo[166871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:57:05 compute-1 sudo[166871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:05.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:06 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:06.085 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:57:06 compute-1 nova_compute[162974]: 2025-10-09 09:57:06.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:06 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:06.088 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:57:06 compute-1 nova_compute[162974]: 2025-10-09 09:57:06.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:06 compute-1 sudo[166871]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:06.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:06 compute-1 ceph-mon[9795]: pgmap v717: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 09 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:57:07 compute-1 nova_compute[162974]: 2025-10-09 09:57:07.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:07 compute-1 ceph-mon[9795]: pgmap v718: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 09 09:57:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:07.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.041 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.041 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.042 2 DEBUG nova.objects.instance [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'flavor' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.300 2 DEBUG nova.objects.instance [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_requests' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.316 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.441 2 DEBUG nova.policy [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 09:57:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.776 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Successfully created port: 73007432-5bb0-435a-a871-05f59846a277 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 09 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.278 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Successfully updated port: 73007432-5bb0-435a-a871-05f59846a277 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.290 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.290 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.290 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.341 2 DEBUG nova.compute.manager [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-changed-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.341 2 DEBUG nova.compute.manager [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing instance network info cache due to event network-changed-73007432-5bb0-435a-a871-05f59846a277. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.341 2 DEBUG oslo_concurrency.lockutils [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:57:09 compute-1 ceph-mon[9795]: pgmap v719: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 09 09:57:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:09.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:10.035 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:10 compute-1 sudo[166928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:57:10 compute-1 sudo[166928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:10 compute-1 sudo[166928]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:10.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:11 compute-1 nova_compute[162974]: 2025-10-09 09:57:11.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:57:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:57:11 compute-1 sudo[166954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:57:11 compute-1 sudo[166954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:11 compute-1 sudo[166954]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:12 compute-1 ceph-mon[9795]: pgmap v720: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 09 09:57:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3137381097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:57:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3137381097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:57:12 compute-1 nova_compute[162974]: 2025-10-09 09:57:12.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:57:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:12.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.090 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.112 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.125 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.125 2 DEBUG oslo_concurrency.lockutils [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.126 2 DEBUG nova.network.neutron [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing network info cache for port 73007432-5bb0-435a-a871-05f59846a277 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.128 2 DEBUG nova.virt.libvirt.vif [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.129 2 DEBUG nova.network.os_vif_util [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.130 2 DEBUG nova.network.os_vif_util [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.131 2 DEBUG os_vif [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73007432-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73007432-5b, col_values=(('external_ids', {'iface-id': '73007432-5bb0-435a-a871-05f59846a277', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:96:4c', 'vm-uuid': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.1419] manager: (tap73007432-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.148 2 INFO os_vif [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b')
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.149 2 DEBUG nova.virt.libvirt.vif [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.149 2 DEBUG nova.network.os_vif_util [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.150 2 DEBUG nova.network.os_vif_util [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.152 2 DEBUG nova.virt.libvirt.guest [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] attach device xml: <interface type="ethernet">
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <mac address="fa:16:3e:78:96:4c"/>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <model type="virtio"/>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <mtu size="1442"/>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <target dev="tap73007432-5b"/>
Oct 09 09:57:13 compute-1 nova_compute[162974]: </interface>
Oct 09 09:57:13 compute-1 nova_compute[162974]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 09 09:57:13 compute-1 kernel: tap73007432-5b: entered promiscuous mode
Oct 09 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.1625] manager: (tap73007432-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct 09 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00043|binding|INFO|Claiming lport 73007432-5bb0-435a-a871-05f59846a277 for this chassis.
Oct 09 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00044|binding|INFO|73007432-5bb0-435a-a871-05f59846a277: Claiming fa:16:3e:78:96:4c 10.100.0.19
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.171 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:96:4c 10.100.0.19'], port_security=['fa:16:3e:78:96:4c 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c62a73d-d0d8-493b-b929-9ae564924767', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e91cefc-5914-40f8-95c0-e51a38aae1ba, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=73007432-5bb0-435a-a871-05f59846a277) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.172 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 73007432-5bb0-435a-a871-05f59846a277 in datapath 3c62a73d-d0d8-493b-b929-9ae564924767 bound to our chassis
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.173 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c62a73d-d0d8-493b-b929-9ae564924767
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.182 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f6a882-5d33-4805-a625-55c59173d295]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.183 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c62a73d-d1 in ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.184 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c62a73d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.184 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[11da73fc-63d4-4e80-8019-1399bfe11457]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.185 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d14dfd5d-3525-433e-bf42-c550c993410c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 systemd-udevd[166987]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.201 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[c106a9f5-b205-4dd4-a279-cd08d5bbfbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00045|binding|INFO|Setting lport 73007432-5bb0-435a-a871-05f59846a277 ovn-installed in OVS
Oct 09 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00046|binding|INFO|Setting lport 73007432-5bb0-435a-a871-05f59846a277 up in Southbound
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.2181] device (tap73007432-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.2190] device (tap73007432-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.228 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[4156f9f4-a9cb-4ca8-bebc-1e9b93c764af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.247 2 DEBUG nova.virt.libvirt.driver [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.247 2 DEBUG nova.virt.libvirt.driver [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.247 2 DEBUG nova.virt.libvirt.driver [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:4d:30:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.248 2 DEBUG nova.virt.libvirt.driver [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:78:96:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.259 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[74e8684c-2a0a-4b9d-9c24-5659db6b9f0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.262 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c0305df5-ff9f-4365-9774-312b7e19f498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.2632] manager: (tap3c62a73d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.266 2 DEBUG nova.virt.libvirt.guest [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <nova:creationTime>2025-10-09 09:57:13</nova:creationTime>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <nova:flavor name="m1.nano">
Oct 09 09:57:13 compute-1 nova_compute[162974]:     <nova:memory>128</nova:memory>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     <nova:disk>1</nova:disk>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     <nova:swap>0</nova:swap>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     <nova:vcpus>1</nova:vcpus>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   </nova:flavor>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <nova:owner>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   </nova:owner>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   <nova:ports>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct 09 09:57:13 compute-1 nova_compute[162974]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     </nova:port>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     <nova:port uuid="73007432-5bb0-435a-a871-05f59846a277">
Oct 09 09:57:13 compute-1 nova_compute[162974]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 09 09:57:13 compute-1 nova_compute[162974]:     </nova:port>
Oct 09 09:57:13 compute-1 nova_compute[162974]:   </nova:ports>
Oct 09 09:57:13 compute-1 nova_compute[162974]: </nova:instance>
Oct 09 09:57:13 compute-1 nova_compute[162974]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.283 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.292 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed522c2-36a0-4360-a051-39f90f4a642b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.293 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[12a18549-4203-4a01-acfa-8254ea70ed37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.3159] device (tap3c62a73d-d0): carrier: link connected
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.322 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1f87ce-f4f9-492a-bc6e-c5b2fb49a0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.336 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa4f86b-1ac6-4e66-a25c-1cb362f23b7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c62a73d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:39:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 151688, 'reachable_time': 36095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 167004, 'error': None, 'target': 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.351 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[842d2586-fe4d-49ed-ab68-ef63010dafac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:3975'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151688, 'tstamp': 151688}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 167005, 'error': None, 'target': 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.365 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dbea62-1d0e-4ca1-9612-0905728299c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c62a73d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:39:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 151688, 'reachable_time': 36095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 167006, 'error': None, 'target': 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.395 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e07d97df-e626-4580-811d-000fafa2ec04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.453 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e385a408-28ca-45e8-a319-c705b4ec397e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.454 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c62a73d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.454 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.454 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c62a73d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.4568] manager: (tap3c62a73d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 09 09:57:13 compute-1 kernel: tap3c62a73d-d0: entered promiscuous mode
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.460 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c62a73d-d0, col_values=(('external_ids', {'iface-id': 'b0e7930e-d821-45eb-a309-d4e5e2c7e0f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00047|binding|INFO|Releasing lport b0e7930e-d821-45eb-a309-d4e5e2c7e0f3 from this chassis (sb_readonly=0)
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.475 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c62a73d-d0d8-493b-b929-9ae564924767.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c62a73d-d0d8-493b-b929-9ae564924767.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.476 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[2684d2fa-b981-44ba-ae61-3d77e570824a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.476 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: global
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     log         /dev/log local0 debug
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     log-tag     haproxy-metadata-proxy-3c62a73d-d0d8-493b-b929-9ae564924767
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     user        root
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     group       root
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     maxconn     1024
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     pidfile     /var/lib/neutron/external/pids/3c62a73d-d0d8-493b-b929-9ae564924767.pid.haproxy
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     daemon
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: defaults
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     log global
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     mode http
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     option httplog
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     option dontlognull
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     option http-server-close
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     option forwardfor
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     retries                 3
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     timeout http-request    30s
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     timeout connect         30s
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     timeout client          32s
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     timeout server          32s
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     timeout http-keep-alive 30s
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: listen listener
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     bind 169.254.169.254:80
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:     http-request add-header X-OVN-Network-ID 3c62a73d-d0d8-493b-b929-9ae564924767
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.477 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'env', 'PROCESS_TAG=haproxy-3c62a73d-d0d8-493b-b929-9ae564924767', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c62a73d-d0d8-493b-b929-9ae564924767.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 09:57:13 compute-1 podman[167036]: 2025-10-09 09:57:13.775274883 +0000 UTC m=+0.042232844 container create bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 09 09:57:13 compute-1 systemd[1]: Started libpod-conmon-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8.scope.
Oct 09 09:57:13 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:57:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d101d488314606cd0605ba833244519bc1b143a95501ac83d26953afcdb780b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 09:57:13 compute-1 podman[167036]: 2025-10-09 09:57:13.83640896 +0000 UTC m=+0.103366920 container init bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:57:13 compute-1 podman[167036]: 2025-10-09 09:57:13.842270624 +0000 UTC m=+0.109228584 container start bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 09 09:57:13 compute-1 podman[167036]: 2025-10-09 09:57:13.760293041 +0000 UTC m=+0.027251000 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:57:13 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [NOTICE]   (167052) : New worker (167054) forked
Oct 09 09:57:13 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [NOTICE]   (167052) : Loading success.
Oct 09 09:57:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:13.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:14 compute-1 ceph-mon[9795]: pgmap v721: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 18 KiB/s wr, 2 op/s
Oct 09 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.257 2 DEBUG nova.compute.manager [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.257 2 DEBUG oslo_concurrency.lockutils [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.258 2 DEBUG oslo_concurrency.lockutils [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.258 2 DEBUG oslo_concurrency.lockutils [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.258 2 DEBUG nova.compute.manager [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.258 2 WARNING nova.compute.manager [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 for instance with vm_state active and task_state None.
Oct 09 09:57:14 compute-1 ovn_controller[62080]: 2025-10-09T09:57:14Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:96:4c 10.100.0.19
Oct 09 09:57:14 compute-1 ovn_controller[62080]: 2025-10-09T09:57:14Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:96:4c 10.100.0.19
Oct 09 09:57:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:14.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.156 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-73007432-5bb0-435a-a871-05f59846a277" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.156 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-73007432-5bb0-435a-a871-05f59846a277" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.171 2 DEBUG nova.objects.instance [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'flavor' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.184 2 DEBUG nova.virt.libvirt.vif [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.185 2 DEBUG nova.network.os_vif_util [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.185 2 DEBUG nova.network.os_vif_util [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.188 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.190 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.192 2 DEBUG nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Attempting to detach device tap73007432-5b from instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.192 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] detach device xml: <interface type="ethernet">
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <mac address="fa:16:3e:78:96:4c"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <model type="virtio"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <mtu size="1442"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <target dev="tap73007432-5b"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]: </interface>
Oct 09 09:57:15 compute-1 nova_compute[162974]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.196 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.199 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <name>instance-00000003</name>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <uuid>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</uuid>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <metadata>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:creationTime>2025-10-09 09:57:13</nova:creationTime>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:flavor name="m1.nano">
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:memory>128</nova:memory>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:disk>1</nova:disk>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:swap>0</nova:swap>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:vcpus>1</nova:vcpus>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </nova:flavor>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:owner>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </nova:owner>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:ports>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </nova:port>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:port uuid="73007432-5bb0-435a-a871-05f59846a277">
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </nova:port>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </nova:ports>
Oct 09 09:57:15 compute-1 nova_compute[162974]: </nova:instance>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </metadata>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <memory unit='KiB'>131072</memory>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <vcpu placement='static'>1</vcpu>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <resource>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <partition>/machine</partition>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </resource>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <sysinfo type='smbios'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <system>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='manufacturer'>RDO</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='product'>OpenStack Compute</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='serial'>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='uuid'>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='family'>Virtual Machine</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </system>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </sysinfo>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <os>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <boot dev='hd'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <smbios mode='sysinfo'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </os>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <features>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <acpi/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <apic/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <vmcoreinfo state='on'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </features>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <cpu mode='custom' match='exact' check='full'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <vendor>AMD</vendor>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='x2apic'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='tsc-deadline'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='hypervisor'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='tsc_adjust'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='vaes'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='spec-ctrl'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='stibp'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='arch-capabilities'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='ssbd'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='cmp_legacy'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='overflow-recov'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='succor'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='virt-ssbd'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='lbrv'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='tsc-scale'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='vmcb-clean'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='flushbyasid'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='pause-filter'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='pfthreshold'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='v-vmsave-vmload'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='vgif'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='rdctl-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='mds-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='gds-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='rfds-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='svm'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='topoext'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='npt'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='nrip-save'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <clock offset='utc'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <timer name='pit' tickpolicy='delay'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <timer name='hpet' present='no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </clock>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <on_poweroff>destroy</on_poweroff>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <on_reboot>restart</on_reboot>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <on_crash>destroy</on_crash>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <disk type='network' device='disk'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <driver name='qemu' type='raw' cache='none'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <auth username='openstack'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <source protocol='rbd' name='vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk' index='2'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.100' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.102' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.101' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </source>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target dev='vda' bus='virtio'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='virtio-disk0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <disk type='network' device='cdrom'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <driver name='qemu' type='raw' cache='none'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <auth username='openstack'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <source protocol='rbd' name='vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config' index='1'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.100' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.102' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.101' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </source>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target dev='sda' bus='sata'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <readonly/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='sata0-0-0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='0' model='pcie-root'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pcie.0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='1' port='0x10'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='2' port='0x11'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='3' port='0x12'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.3'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='4' port='0x13'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.4'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='5' port='0x14'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.5'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='6' port='0x15'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.6'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='7' port='0x16'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.7'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='8' port='0x17'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.8'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='9' port='0x18'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.9'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='10' port='0x19'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.10'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='11' port='0x1a'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.11'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='12' port='0x1b'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.12'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='13' port='0x1c'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.13'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='14' port='0x1d'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.14'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='15' port='0x1e'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.15'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='16' port='0x1f'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.16'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='17' port='0x20'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.17'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='18' port='0x21'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.18'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='19' port='0x22'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.19'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='20' port='0x23'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.20'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='21' port='0x24'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.21'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='22' port='0x25'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.22'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='23' port='0x26'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.23'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='24' port='0x27'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.24'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='25' port='0x28'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.25'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-pci-bridge'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.26'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='usb'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='sata' index='0'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='ide'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <interface type='ethernet'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <mac address='fa:16:3e:4d:30:c8'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target dev='tap8d2d29b3-65'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model type='virtio'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <driver name='vhost' rx_queue_size='512'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <mtu size='1442'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='net0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <interface type='ethernet'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <mac address='fa:16:3e:78:96:4c'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target dev='tap73007432-5b'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model type='virtio'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <driver name='vhost' rx_queue_size='512'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <mtu size='1442'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='net1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <serial type='pty'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <source path='/dev/pts/0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <log file='/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log' append='off'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target type='isa-serial' port='0'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <model name='isa-serial'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </target>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='serial0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </serial>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <console type='pty' tty='/dev/pts/0'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <source path='/dev/pts/0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <log file='/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log' append='off'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target type='serial' port='0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='serial0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </console>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <input type='tablet' bus='usb'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='input0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='usb' bus='0' port='1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </input>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <input type='mouse' bus='ps2'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='input1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </input>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <input type='keyboard' bus='ps2'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='input2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </input>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <listen type='address' address='::0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </graphics>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <audio id='1' type='none'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <video>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model type='virtio' heads='1' primary='yes'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='video0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </video>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <watchdog model='itco' action='reset'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='watchdog0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </watchdog>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <memballoon model='virtio'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <stats period='10'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='balloon0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </memballoon>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <rng model='virtio'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <backend model='random'>/dev/urandom</backend>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='rng0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <label>system_u:system_r:svirt_t:s0:c214,c252</label>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c252</imagelabel>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </seclabel>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <label>+107:+107</label>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <imagelabel>+107:+107</imagelabel>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </seclabel>
Oct 09 09:57:15 compute-1 nova_compute[162974]: </domain>
Oct 09 09:57:15 compute-1 nova_compute[162974]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.199 2 INFO nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully detached device tap73007432-5b from instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 from the persistent domain config.
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.199 2 DEBUG nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] (1/8): Attempting to detach device tap73007432-5b with device alias net1 from instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.199 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] detach device xml: <interface type="ethernet">
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <mac address="fa:16:3e:78:96:4c"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <model type="virtio"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <mtu size="1442"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <target dev="tap73007432-5b"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]: </interface>
Oct 09 09:57:15 compute-1 nova_compute[162974]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.219 2 DEBUG nova.network.neutron [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updated VIF entry in instance network info cache for port 73007432-5bb0-435a-a871-05f59846a277. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.220 2 DEBUG nova.network.neutron [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.229 2 DEBUG oslo_concurrency.lockutils [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:57:15 compute-1 kernel: tap73007432-5b (unregistering): left promiscuous mode
Oct 09 09:57:15 compute-1 NetworkManager[982]: <info>  [1760003835.2932] device (tap73007432-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:57:15 compute-1 ovn_controller[62080]: 2025-10-09T09:57:15Z|00048|binding|INFO|Releasing lport 73007432-5bb0-435a-a871-05f59846a277 from this chassis (sb_readonly=0)
Oct 09 09:57:15 compute-1 ovn_controller[62080]: 2025-10-09T09:57:15Z|00049|binding|INFO|Setting lport 73007432-5bb0-435a-a871-05f59846a277 down in Southbound
Oct 09 09:57:15 compute-1 ovn_controller[62080]: 2025-10-09T09:57:15Z|00050|binding|INFO|Removing iface tap73007432-5b ovn-installed in OVS
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.306 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:96:4c 10.100.0.19'], port_security=['fa:16:3e:78:96:4c 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c62a73d-d0d8-493b-b929-9ae564924767', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e91cefc-5914-40f8-95c0-e51a38aae1ba, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=73007432-5bb0-435a-a871-05f59846a277) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.307 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 73007432-5bb0-435a-a871-05f59846a277 in datapath 3c62a73d-d0d8-493b-b929-9ae564924767 unbound from our chassis
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.308 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c62a73d-d0d8-493b-b929-9ae564924767, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.309 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[db7d1b18-c53e-4e32-bcf4-b32c0b74c8e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.309 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 namespace which is not needed anymore
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.309 2 DEBUG nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Received event <DeviceRemovedEvent: 1760003835.3093555, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.312 2 DEBUG nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Start waiting for the detach event from libvirt for device tap73007432-5b with device alias net1 for instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.312 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.315 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <name>instance-00000003</name>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <uuid>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</uuid>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <metadata>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:creationTime>2025-10-09 09:57:13</nova:creationTime>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:flavor name="m1.nano">
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:memory>128</nova:memory>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:disk>1</nova:disk>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:swap>0</nova:swap>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:vcpus>1</nova:vcpus>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </nova:flavor>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:owner>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </nova:owner>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:ports>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </nova:port>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:port uuid="73007432-5bb0-435a-a871-05f59846a277">
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </nova:port>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </nova:ports>
Oct 09 09:57:15 compute-1 nova_compute[162974]: </nova:instance>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </metadata>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <memory unit='KiB'>131072</memory>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <vcpu placement='static'>1</vcpu>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <resource>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <partition>/machine</partition>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </resource>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <sysinfo type='smbios'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <system>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='manufacturer'>RDO</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='product'>OpenStack Compute</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='serial'>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='uuid'>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <entry name='family'>Virtual Machine</entry>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </system>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </sysinfo>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <os>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <boot dev='hd'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <smbios mode='sysinfo'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </os>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <features>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <acpi/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <apic/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <vmcoreinfo state='on'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </features>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <cpu mode='custom' match='exact' check='full'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <vendor>AMD</vendor>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='x2apic'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='tsc-deadline'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='hypervisor'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='tsc_adjust'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='vaes'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='spec-ctrl'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='stibp'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='arch-capabilities'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='ssbd'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='cmp_legacy'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='overflow-recov'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='succor'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='virt-ssbd'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='lbrv'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='tsc-scale'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='vmcb-clean'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='flushbyasid'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='pause-filter'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='pfthreshold'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='v-vmsave-vmload'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='vgif'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='rdctl-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='mds-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='gds-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='rfds-no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='svm'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='require' name='topoext'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='npt'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='nrip-save'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <clock offset='utc'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <timer name='pit' tickpolicy='delay'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <timer name='hpet' present='no'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </clock>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <on_poweroff>destroy</on_poweroff>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <on_reboot>restart</on_reboot>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <on_crash>destroy</on_crash>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <disk type='network' device='disk'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <driver name='qemu' type='raw' cache='none'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <auth username='openstack'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <source protocol='rbd' name='vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk' index='2'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.100' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.102' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.101' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </source>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target dev='vda' bus='virtio'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='virtio-disk0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <disk type='network' device='cdrom'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <driver name='qemu' type='raw' cache='none'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <auth username='openstack'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <source protocol='rbd' name='vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config' index='1'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.100' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.102' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <host name='192.168.122.101' port='6789'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </source>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target dev='sda' bus='sata'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <readonly/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='sata0-0-0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='0' model='pcie-root'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pcie.0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='1' port='0x10'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='2' port='0x11'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='3' port='0x12'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.3'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='4' port='0x13'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.4'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='5' port='0x14'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.5'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='6' port='0x15'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.6'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='7' port='0x16'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.7'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='8' port='0x17'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.8'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='9' port='0x18'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.9'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='10' port='0x19'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.10'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='11' port='0x1a'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.11'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='12' port='0x1b'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.12'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='13' port='0x1c'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.13'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='14' port='0x1d'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.14'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='15' port='0x1e'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.15'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='16' port='0x1f'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.16'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='17' port='0x20'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.17'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='18' port='0x21'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.18'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='19' port='0x22'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.19'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='20' port='0x23'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.20'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='21' port='0x24'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.21'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='22' port='0x25'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.22'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='23' port='0x26'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.23'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='24' port='0x27'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.24'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-root-port'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target chassis='25' port='0x28'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.25'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model name='pcie-pci-bridge'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='pci.26'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='usb'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <controller type='sata' index='0'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='ide'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </controller>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <interface type='ethernet'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <mac address='fa:16:3e:4d:30:c8'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target dev='tap8d2d29b3-65'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model type='virtio'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <driver name='vhost' rx_queue_size='512'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <mtu size='1442'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='net0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <serial type='pty'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <source path='/dev/pts/0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <log file='/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log' append='off'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target type='isa-serial' port='0'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:         <model name='isa-serial'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       </target>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='serial0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </serial>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <console type='pty' tty='/dev/pts/0'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <source path='/dev/pts/0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <log file='/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log' append='off'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <target type='serial' port='0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='serial0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </console>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <input type='tablet' bus='usb'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='input0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='usb' bus='0' port='1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </input>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <input type='mouse' bus='ps2'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='input1'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </input>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <input type='keyboard' bus='ps2'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='input2'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </input>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <listen type='address' address='::0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </graphics>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <audio id='1' type='none'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <video>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <model type='virtio' heads='1' primary='yes'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='video0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </video>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <watchdog model='itco' action='reset'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='watchdog0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </watchdog>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <memballoon model='virtio'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <stats period='10'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='balloon0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </memballoon>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <rng model='virtio'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <backend model='random'>/dev/urandom</backend>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <alias name='rng0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <label>system_u:system_r:svirt_t:s0:c214,c252</label>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c252</imagelabel>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </seclabel>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <label>+107:+107</label>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <imagelabel>+107:+107</imagelabel>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </seclabel>
Oct 09 09:57:15 compute-1 nova_compute[162974]: </domain>
Oct 09 09:57:15 compute-1 nova_compute[162974]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.317 2 INFO nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully detached device tap73007432-5b from instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 from the live domain config.
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.317 2 DEBUG nova.virt.libvirt.vif [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.318 2 DEBUG nova.network.os_vif_util [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.318 2 DEBUG nova.network.os_vif_util [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.319 2 DEBUG os_vif [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73007432-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.328 2 INFO os_vif [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b')
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.328 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:creationTime>2025-10-09 09:57:15</nova:creationTime>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:flavor name="m1.nano">
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:memory>128</nova:memory>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:disk>1</nova:disk>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:swap>0</nova:swap>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:vcpus>1</nova:vcpus>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </nova:flavor>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:owner>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </nova:owner>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   <nova:ports>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct 09 09:57:15 compute-1 nova_compute[162974]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 09 09:57:15 compute-1 nova_compute[162974]:     </nova:port>
Oct 09 09:57:15 compute-1 nova_compute[162974]:   </nova:ports>
Oct 09 09:57:15 compute-1 nova_compute[162974]: </nova:instance>
Oct 09 09:57:15 compute-1 nova_compute[162974]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 09 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [NOTICE]   (167052) : haproxy version is 2.8.14-c23fe91
Oct 09 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [NOTICE]   (167052) : path to executable is /usr/sbin/haproxy
Oct 09 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [WARNING]  (167052) : Exiting Master process...
Oct 09 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [ALERT]    (167052) : Current worker (167054) exited with code 143 (Terminated)
Oct 09 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [WARNING]  (167052) : All workers exited. Exiting... (0)
Oct 09 09:57:15 compute-1 systemd[1]: libpod-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8.scope: Deactivated successfully.
Oct 09 09:57:15 compute-1 podman[167078]: 2025-10-09 09:57:15.428548202 +0000 UTC m=+0.035297926 container stop bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:57:15 compute-1 conmon[167047]: conmon bd2151fd7758620b1ade <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8.scope/container/memory.events
Oct 09 09:57:15 compute-1 podman[167078]: 2025-10-09 09:57:15.43405669 +0000 UTC m=+0.040806453 container died bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:57:15 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8-userdata-shm.mount: Deactivated successfully.
Oct 09 09:57:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-9d101d488314606cd0605ba833244519bc1b143a95501ac83d26953afcdb780b-merged.mount: Deactivated successfully.
Oct 09 09:57:15 compute-1 podman[167078]: 2025-10-09 09:57:15.462833797 +0000 UTC m=+0.069583520 container cleanup bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 09 09:57:15 compute-1 systemd[1]: libpod-conmon-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8.scope: Deactivated successfully.
Oct 09 09:57:15 compute-1 podman[167103]: 2025-10-09 09:57:15.526136644 +0000 UTC m=+0.036002676 container remove bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.532 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[714b36ea-eeda-402e-b9a9-db88f36d0ebb]: (4, ('Thu Oct  9 09:57:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 (bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8)\nbd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8\nThu Oct  9 09:57:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 (bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8)\nbd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.534 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[40f5f5e0-9a57-456c-a40e-dac1c9f9eecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.535 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c62a73d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:15 compute-1 kernel: tap3c62a73d-d0: left promiscuous mode
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.554 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6faeddc0-a52e-4d24-bd5b-a8cdae1e285a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.575 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c0755656-f3eb-4e78-a3e4-753c780a0e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.576 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c4445cb1-c35f-4076-9342-6b47bcdf7402]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.591 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[db67513d-e449-4826-b8c8-3a38629f9d17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 151682, 'reachable_time': 42324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 167121, 'error': None, 'target': 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:15 compute-1 systemd[1]: run-netns-ovnmeta\x2d3c62a73d\x2dd0d8\x2d493b\x2db929\x2d9ae564924767.mount: Deactivated successfully.
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.595 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.595 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[b2822fa7-2ee2-479c-8405-97dbc16b28c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:15 compute-1 podman[167112]: 2025-10-09 09:57:15.665502009 +0000 UTC m=+0.088024717 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 09:57:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:15.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:16 compute-1 ceph-mon[9795]: pgmap v722: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 18 KiB/s wr, 2 op/s
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.311 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.311 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.312 2 DEBUG nova.network.neutron [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.330 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.330 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.330 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 WARNING nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 for instance with vm_state active and task_state None.
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-unplugged-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-unplugged-73007432-5bb0-435a-a871-05f59846a277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 WARNING nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-unplugged-73007432-5bb0-435a-a871-05f59846a277 for instance with vm_state active and task_state None.
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 WARNING nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 for instance with vm_state active and task_state None.
Oct 09 09:57:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:57:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:16.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.630 2 INFO nova.network.neutron [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Port 73007432-5bb0-435a-a871-05f59846a277 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 09 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.630 2 DEBUG nova.network.neutron [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.642 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.656 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-73007432-5bb0-435a-a871-05f59846a277" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:17.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:18 compute-1 ovn_controller[62080]: 2025-10-09T09:57:18Z|00051|binding|INFO|Releasing lport 57354100-1abc-4399-a76b-c42eaec1ad73 from this chassis (sb_readonly=0)
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:18 compute-1 ceph-mon[9795]: pgmap v723: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 6.4 KiB/s wr, 2 op/s
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.418 2 DEBUG nova.compute.manager [req-7445ef2f-eba9-4340-83cb-6470982e4f3f req-3eb1609a-c1f0-45b0-a978-7fea2bb9192f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-deleted-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.535 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.535 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.536 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.536 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.536 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.538 2 INFO nova.compute.manager [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Terminating instance
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.539 2 DEBUG nova.compute.manager [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 09 09:57:18 compute-1 kernel: tap8d2d29b3-65 (unregistering): left promiscuous mode
Oct 09 09:57:18 compute-1 NetworkManager[982]: <info>  [1760003838.5755] device (tap8d2d29b3-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:18 compute-1 ovn_controller[62080]: 2025-10-09T09:57:18Z|00052|binding|INFO|Releasing lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f from this chassis (sb_readonly=0)
Oct 09 09:57:18 compute-1 ovn_controller[62080]: 2025-10-09T09:57:18Z|00053|binding|INFO|Setting lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f down in Southbound
Oct 09 09:57:18 compute-1 ovn_controller[62080]: 2025-10-09T09:57:18Z|00054|binding|INFO|Removing iface tap8d2d29b3-65 ovn-installed in OVS
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.588 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:30:c8 10.100.0.7'], port_security=['fa:16:3e:4d:30:c8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26c660ed-37e9-4f44-b603-3901342edf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fd66aef-c4b5-4f4c-ae18-6ccc210d224e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edfdfe9-a5ca-4224-9930-4324a48b984f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.589 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f in datapath 26c660ed-37e9-4f44-b603-3901342edf9b unbound from our chassis
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.590 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26c660ed-37e9-4f44-b603-3901342edf9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.591 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[889646e7-a501-4582-9582-e226039fb146]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.593 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b namespace which is not needed anymore
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:18 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 09 09:57:18 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 11.310s CPU time.
Oct 09 09:57:18 compute-1 systemd-machined[120683]: Machine qemu-2-instance-00000003 terminated.
Oct 09 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [NOTICE]   (166695) : haproxy version is 2.8.14-c23fe91
Oct 09 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [NOTICE]   (166695) : path to executable is /usr/sbin/haproxy
Oct 09 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [WARNING]  (166695) : Exiting Master process...
Oct 09 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [ALERT]    (166695) : Current worker (166697) exited with code 143 (Terminated)
Oct 09 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [WARNING]  (166695) : All workers exited. Exiting... (0)
Oct 09 09:57:18 compute-1 systemd[1]: libpod-e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c.scope: Deactivated successfully.
Oct 09 09:57:18 compute-1 podman[167154]: 2025-10-09 09:57:18.709027252 +0000 UTC m=+0.034838489 container died e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 09 09:57:18 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c-userdata-shm.mount: Deactivated successfully.
Oct 09 09:57:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-08a45d62be4bd9ce9df4641fb90075d2091d46c2e93ad8f4010bfee1112d2e50-merged.mount: Deactivated successfully.
Oct 09 09:57:18 compute-1 podman[167154]: 2025-10-09 09:57:18.733307711 +0000 UTC m=+0.059118947 container cleanup e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 09:57:18 compute-1 systemd[1]: libpod-conmon-e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c.scope: Deactivated successfully.
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.767 2 INFO nova.virt.libvirt.driver [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance destroyed successfully.
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.768 2 DEBUG nova.objects.instance [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:57:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:57:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.783 2 DEBUG nova.virt.libvirt.vif [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.784 2 DEBUG nova.network.os_vif_util [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.785 2 DEBUG nova.network.os_vif_util [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.785 2 DEBUG os_vif [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:18 compute-1 podman[167178]: 2025-10-09 09:57:18.789220061 +0000 UTC m=+0.033364409 container remove e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d2d29b3-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.798 2 INFO os_vif [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65')
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.795 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd3fa96-326b-4a0e-8760-729fe3a85c1e]: (4, ('Thu Oct  9 09:57:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b (e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c)\ne8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c\nThu Oct  9 09:57:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b (e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c)\ne8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.800 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f6df52-84de-44b3-b37b-371e2828c45d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.801 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26c660ed-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:18 compute-1 kernel: tap26c660ed-30: left promiscuous mode
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.820 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c52b2619-db31-4917-b79c-43a02e470b73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.825 2 DEBUG nova.compute.manager [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-unplugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG oslo_concurrency.lockutils [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG oslo_concurrency.lockutils [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG oslo_concurrency.lockutils [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG nova.compute.manager [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-unplugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG nova.compute.manager [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-unplugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.840 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6d70eb01-fd37-40f6-8254-ea6487705924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.841 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a34418-1198-421e-82f2-fea7612ee299]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.857 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[da5c976d-bff2-444c-903b-7da7e85b973f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 149041, 'reachable_time': 41097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 167219, 'error': None, 'target': 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:18 compute-1 systemd[1]: run-netns-ovnmeta\x2d26c660ed\x2d37e9\x2d4f44\x2db603\x2d3901342edf9b.mount: Deactivated successfully.
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.860 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.861 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4bae9c-1628-43d4-85c1-440c367f6f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.987 2 INFO nova.virt.libvirt.driver [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Deleting instance files /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_del
Oct 09 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.988 2 INFO nova.virt.libvirt.driver [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Deletion of /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_del complete
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.025 2 INFO nova.compute.manager [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Took 0.49 seconds to destroy the instance on the hypervisor.
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.026 2 DEBUG oslo.service.loopingcall [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.027 2 DEBUG nova.compute.manager [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.027 2 DEBUG nova.network.neutron [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.395 2 DEBUG nova.network.neutron [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.403 2 INFO nova.compute.manager [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Took 0.38 seconds to deallocate network for instance.
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.434 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.434 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.474 2 DEBUG oslo_concurrency.processutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:57:19 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3832152381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.835 2 DEBUG oslo_concurrency.processutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.840 2 DEBUG nova.compute.provider_tree [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.851 2 DEBUG nova.scheduler.client.report [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.863 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.881 2 INFO nova.scheduler.client.report [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01
Oct 09 09:57:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:19.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.930 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:20 compute-1 ceph-mon[9795]: pgmap v724: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Oct 09 09:57:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:57:20 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3832152381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.483 2 DEBUG nova.compute.manager [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.483 2 DEBUG nova.compute.manager [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing instance network info cache due to event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.483 2 DEBUG oslo_concurrency.lockutils [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.483 2 DEBUG oslo_concurrency.lockutils [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.484 2 DEBUG nova.network.neutron [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:57:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.870 2 DEBUG nova.compute.manager [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.870 2 DEBUG oslo_concurrency.lockutils [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.871 2 DEBUG oslo_concurrency.lockutils [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.871 2 DEBUG oslo_concurrency.lockutils [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.871 2 DEBUG nova.compute.manager [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.871 2 WARNING nova.compute.manager [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f for instance with vm_state deleted and task_state None.
Oct 09 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.914 2 DEBUG nova.network.neutron [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 09:57:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:21.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:21 compute-1 nova_compute[162974]: 2025-10-09 09:57:21.929 2 DEBUG nova.network.neutron [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 09 09:57:21 compute-1 nova_compute[162974]: 2025-10-09 09:57:21.930 2 DEBUG oslo_concurrency.lockutils [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:57:21 compute-1 nova_compute[162974]: 2025-10-09 09:57:21.930 2 DEBUG nova.compute.manager [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-deleted-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:22 compute-1 ceph-mon[9795]: pgmap v725: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Oct 09 09:57:22 compute-1 nova_compute[162974]: 2025-10-09 09:57:22.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:57:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:57:23 compute-1 nova_compute[162974]: 2025-10-09 09:57:23.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:23.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:24 compute-1 ceph-mon[9795]: pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 9.8 KiB/s wr, 30 op/s
Oct 09 09:57:24 compute-1 podman[167247]: 2025-10-09 09:57:24.542330785 +0000 UTC m=+0.046361519 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true)
Oct 09 09:57:24 compute-1 podman[167246]: 2025-10-09 09:57:24.554248419 +0000 UTC m=+0.056443572 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct 09 09:57:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:57:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:24.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:57:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:57:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:25.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:57:26 compute-1 ceph-mon[9795]: pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 09 09:57:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:26.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:27 compute-1 nova_compute[162974]: 2025-10-09 09:57:27.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:27 compute-1 nova_compute[162974]: 2025-10-09 09:57:27.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:27 compute-1 nova_compute[162974]: 2025-10-09 09:57:27.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:57:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:27.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:57:28 compute-1 ceph-mon[9795]: pgmap v728: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct 09 09:57:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:28.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:28 compute-1 nova_compute[162974]: 2025-10-09 09:57:28.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:29.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:30 compute-1 ceph-mon[9795]: pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 09 09:57:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:30.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:31 compute-1 sudo[167284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:57:31 compute-1 sudo[167284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:31 compute-1 sudo[167284]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:31.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:32 compute-1 ceph-mon[9795]: pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 09 09:57:32 compute-1 nova_compute[162974]: 2025-10-09 09:57:32.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:32.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:33 compute-1 ceph-mon[9795]: pgmap v731: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 09 09:57:33 compute-1 nova_compute[162974]: 2025-10-09 09:57:33.767 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760003838.765122, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:57:33 compute-1 nova_compute[162974]: 2025-10-09 09:57:33.768 2 INFO nova.compute.manager [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] VM Stopped (Lifecycle Event)
Oct 09 09:57:33 compute-1 nova_compute[162974]: 2025-10-09 09:57:33.798 2 DEBUG nova.compute.manager [None req-402efd19-3943-4f94-a2ab-db2c1852f20e - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:57:33 compute-1 nova_compute[162974]: 2025-10-09 09:57:33.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:33.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:34.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:35 compute-1 ceph-mon[9795]: pgmap v732: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:57:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.358812) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855358843, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2388, "num_deletes": 251, "total_data_size": 6396320, "memory_usage": 6495288, "flush_reason": "Manual Compaction"}
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855369361, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4153932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20689, "largest_seqno": 23072, "table_properties": {"data_size": 4144126, "index_size": 6236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20058, "raw_average_key_size": 20, "raw_value_size": 4124479, "raw_average_value_size": 4187, "num_data_blocks": 272, "num_entries": 985, "num_filter_entries": 985, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003650, "oldest_key_time": 1760003650, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10721 microseconds, and 7612 cpu microseconds.
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.369535) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4153932 bytes OK
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.369623) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.370064) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.370075) EVENT_LOG_v1 {"time_micros": 1760003855370072, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.370086) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6385725, prev total WAL file size 6385725, number of live WAL files 2.
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.372208) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(4056KB)], [39(12MB)]
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855372231, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 16775535, "oldest_snapshot_seqno": -1}
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5402 keys, 14601163 bytes, temperature: kUnknown
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855415839, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14601163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14562758, "index_size": 23767, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136207, "raw_average_key_size": 25, "raw_value_size": 14462275, "raw_average_value_size": 2677, "num_data_blocks": 981, "num_entries": 5402, "num_filter_entries": 5402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416441) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14601163 bytes
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 380.2 rd, 331.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.0 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 5926, records dropped: 524 output_compression: NoCompression
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416916) EVENT_LOG_v1 {"time_micros": 1760003855416911, "job": 22, "event": "compaction_finished", "compaction_time_micros": 44118, "compaction_time_cpu_micros": 20863, "output_level": 6, "num_output_files": 1, "total_output_size": 14601163, "num_input_records": 5926, "num_output_records": 5402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855417869, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855419648, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.371605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:35.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:36 compute-1 podman[167311]: 2025-10-09 09:57:36.578348912 +0000 UTC m=+0.081682950 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 09 09:57:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:57:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:36.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:57:37 compute-1 ceph-mon[9795]: pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:57:37 compute-1 nova_compute[162974]: 2025-10-09 09:57:37.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:37.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.222 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.222 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.237 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.293 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.294 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.298 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.298 2 INFO nova.compute.claims [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Claim successful on node compute-1.ctlplane.example.com
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.402 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:38 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:57:38 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/138502189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.750 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.755 2 DEBUG nova.compute.provider_tree [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.770 2 DEBUG nova.scheduler.client.report [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.791 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.792 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 09 09:57:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:57:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:38.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.837 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.837 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.853 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.862 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.924 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.924 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.925 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Creating image(s)
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.943 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.959 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.974 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.976 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.990 2 DEBUG nova.policy [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.025 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.026 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.026 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.027 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.044 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.047 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb e811a931-a3de-4684-8b2f-e916788f6ea9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.178 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb e811a931-a3de-4684-8b2f-e916788f6ea9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.222 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.275 2 DEBUG nova.objects.instance [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid e811a931-a3de-4684-8b2f-e916788f6ea9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.290 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.290 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Ensure instance console log exists: /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.291 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.291 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.291 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:39 compute-1 ceph-mon[9795]: pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:57:39 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/138502189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.555 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Successfully created port: 5fdcca80-237d-4123-b2d6-a46f90186d0b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 09 09:57:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:57:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:39.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:57:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:40.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:41 compute-1 ceph-mon[9795]: pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:57:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:41.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:42 compute-1 nova_compute[162974]: 2025-10-09 09:57:42.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:42.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:43 compute-1 ceph-mon[9795]: pgmap v736: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:43 compute-1 nova_compute[162974]: 2025-10-09 09:57:43.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:43.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:57:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:44.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.838 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Successfully updated port: 5fdcca80-237d-4123-b2d6-a46f90186d0b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.852 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.852 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.852 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.927 2 DEBUG nova.compute.manager [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.927 2 DEBUG nova.compute.manager [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing instance network info cache due to event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.927 2 DEBUG oslo_concurrency.lockutils [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:57:45 compute-1 ceph-mon[9795]: pgmap v737: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:57:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:45.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:57:45 compute-1 nova_compute[162974]: 2025-10-09 09:57:45.964 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 09:57:46 compute-1 podman[167528]: 2025-10-09 09:57:46.525232153 +0000 UTC m=+0.037298417 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 09 09:57:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:46.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:47 compute-1 ceph-mon[9795]: pgmap v738: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:47 compute-1 nova_compute[162974]: 2025-10-09 09:57:47.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:47.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.168 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.181 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.181 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance network_info: |[{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.181 2 DEBUG oslo_concurrency.lockutils [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.182 2 DEBUG nova.network.neutron [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.184 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start _get_guest_xml network_info=[{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.187 2 WARNING nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.190 2 DEBUG nova.virt.libvirt.host [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.191 2 DEBUG nova.virt.libvirt.host [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.194 2 DEBUG nova.virt.libvirt.host [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.195 2 DEBUG nova.virt.libvirt.host [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.195 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.195 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.195 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.197 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.197 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.197 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.197 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.199 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:57:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2737454131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.540 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.557 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.559 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:48.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:57:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1167411416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.897 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.898 2 DEBUG nova.virt.libvirt.vif [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1899878609',display_name='tempest-TestNetworkBasicOps-server-1899878609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1899878609',id=4,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZm4C6LRAPtfAr5m77K3NqQxZMtrMltDZaOJjL5VWwqcCmgw5WghdaHagMLuObgYdNXZ08m9cLFMwpCyPUmMwXoTGjd15bkV3f92hF1qRvuScT4iCVTrgjr7uJ/wKpdPQ==',key_name='tempest-TestNetworkBasicOps-1948988860',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-u8otko1l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:57:38Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e811a931-a3de-4684-8b2f-e916788f6ea9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.899 2 DEBUG nova.network.os_vif_util [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.899 2 DEBUG nova.network.os_vif_util [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.900 2 DEBUG nova.objects.instance [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid e811a931-a3de-4684-8b2f-e916788f6ea9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.912 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] End _get_guest_xml xml=<domain type="kvm">
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <uuid>e811a931-a3de-4684-8b2f-e916788f6ea9</uuid>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <name>instance-00000004</name>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <memory>131072</memory>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <vcpu>1</vcpu>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <metadata>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <nova:name>tempest-TestNetworkBasicOps-server-1899878609</nova:name>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <nova:creationTime>2025-10-09 09:57:48</nova:creationTime>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <nova:flavor name="m1.nano">
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <nova:memory>128</nova:memory>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <nova:disk>1</nova:disk>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <nova:swap>0</nova:swap>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <nova:vcpus>1</nova:vcpus>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       </nova:flavor>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <nova:owner>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       </nova:owner>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <nova:ports>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <nova:port uuid="5fdcca80-237d-4123-b2d6-a46f90186d0b">
Oct 09 09:57:48 compute-1 nova_compute[162974]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         </nova:port>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       </nova:ports>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     </nova:instance>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   </metadata>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <sysinfo type="smbios">
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <system>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <entry name="manufacturer">RDO</entry>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <entry name="product">OpenStack Compute</entry>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <entry name="serial">e811a931-a3de-4684-8b2f-e916788f6ea9</entry>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <entry name="uuid">e811a931-a3de-4684-8b2f-e916788f6ea9</entry>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <entry name="family">Virtual Machine</entry>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     </system>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   </sysinfo>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <os>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <boot dev="hd"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <smbios mode="sysinfo"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   </os>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <features>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <acpi/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <apic/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <vmcoreinfo/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   </features>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <clock offset="utc">
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <timer name="hpet" present="no"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   </clock>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <cpu mode="host-model" match="exact">
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <disk type="network" device="disk">
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/e811a931-a3de-4684-8b2f-e916788f6ea9_disk">
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       </source>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <target dev="vda" bus="virtio"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <disk type="network" device="cdrom">
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config">
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       </source>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 09:57:48 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <target dev="sda" bus="sata"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <interface type="ethernet">
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <mac address="fa:16:3e:00:48:22"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <mtu size="1442"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <target dev="tap5fdcca80-23"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <serial type="pty">
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <log file="/var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/console.log" append="off"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     </serial>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <video>
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     </video>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <input type="tablet" bus="usb"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <rng model="virtio">
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <backend model="random">/dev/urandom</backend>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <controller type="usb" index="0"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     <memballoon model="virtio">
Oct 09 09:57:48 compute-1 nova_compute[162974]:       <stats period="10"/>
Oct 09 09:57:48 compute-1 nova_compute[162974]:     </memballoon>
Oct 09 09:57:48 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:57:48 compute-1 nova_compute[162974]: </domain>
Oct 09 09:57:48 compute-1 nova_compute[162974]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.913 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Preparing to wait for external event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.913 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.914 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.914 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.914 2 DEBUG nova.virt.libvirt.vif [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1899878609',display_name='tempest-TestNetworkBasicOps-server-1899878609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1899878609',id=4,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZm4C6LRAPtfAr5m77K3NqQxZMtrMltDZaOJjL5VWwqcCmgw5WghdaHagMLuObgYdNXZ08m9cLFMwpCyPUmMwXoTGjd15bkV3f92hF1qRvuScT4iCVTrgjr7uJ/wKpdPQ==',key_name='tempest-TestNetworkBasicOps-1948988860',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-u8otko1l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:57:38Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e811a931-a3de-4684-8b2f-e916788f6ea9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.915 2 DEBUG nova.network.os_vif_util [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.915 2 DEBUG nova.network.os_vif_util [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.915 2 DEBUG os_vif [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fdcca80-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fdcca80-23, col_values=(('external_ids', {'iface-id': '5fdcca80-237d-4123-b2d6-a46f90186d0b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:48:22', 'vm-uuid': 'e811a931-a3de-4684-8b2f-e916788f6ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:48 compute-1 NetworkManager[982]: <info>  [1760003868.9206] manager: (tap5fdcca80-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.924 2 INFO os_vif [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23')
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.966 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.967 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.967 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:00:48:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.967 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Using config drive
Oct 09 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.985 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.133 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.133 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.133 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.173 2 DEBUG nova.network.neutron [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updated VIF entry in instance network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.174 2 DEBUG nova.network.neutron [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.186 2 DEBUG oslo_concurrency.lockutils [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.255 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Creating config drive at /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.259 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5rdh7hh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.379 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5rdh7hh" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.399 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.402 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:49 compute-1 ceph-mon[9795]: pgmap v739: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2737454131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:57:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1167411416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:57:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:57:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/938447497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.476 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.484 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.485 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Deleting local config drive /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config because it was imported into RBD.
Oct 09 09:57:49 compute-1 kernel: tap5fdcca80-23: entered promiscuous mode
Oct 09 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.5159] manager: (tap5fdcca80-23): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 09 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00055|binding|INFO|Claiming lport 5fdcca80-237d-4123-b2d6-a46f90186d0b for this chassis.
Oct 09 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00056|binding|INFO|5fdcca80-237d-4123-b2d6-a46f90186d0b: Claiming fa:16:3e:00:48:22 10.100.0.3
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.527 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:48:22 10.100.0.3'], port_security=['fa:16:3e:00:48:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e811a931-a3de-4684-8b2f-e916788f6ea9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ab824ab-8ac2-4d9c-9d6e-9bbdb4458228', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6496ebe5-cfc3-4a35-b1e6-27021c277fad, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=5fdcca80-237d-4123-b2d6-a46f90186d0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.527 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 5fdcca80-237d-4123-b2d6-a46f90186d0b in datapath 48ce5fca-3386-4b8a-82e2-88fc71a94881 bound to our chassis
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.528 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48ce5fca-3386-4b8a-82e2-88fc71a94881
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.537 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[dc763a01-c71a-4385-bba5-1a2d27011448]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.537 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48ce5fca-31 in ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.539 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48ce5fca-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.539 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[14235a8e-6c8c-47d0-b0ea-4ff15b62fa46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.540 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[eb51ca5a-1016-4f66-a5f1-612133823d51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 systemd-udevd[167704]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:57:49 compute-1 systemd-machined[120683]: New machine qemu-3-instance-00000004.
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.551 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[abc1ba05-65ba-4416-853d-616480e105a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.5552] device (tap5fdcca80-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.5558] device (tap5fdcca80-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 09:57:49 compute-1 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.576 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5e654eb3-54f8-42ea-8022-c299bac52527]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00057|binding|INFO|Setting lport 5fdcca80-237d-4123-b2d6-a46f90186d0b ovn-installed in OVS
Oct 09 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00058|binding|INFO|Setting lport 5fdcca80-237d-4123-b2d6-a46f90186d0b up in Southbound
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.603 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9cb2cb-9e42-424d-bcc2-9e53eac8e6a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.6071] manager: (tap48ce5fca-30): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.608 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1e140195-8e76-40f9-9b78-f2d212c21bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.633 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[ef11bf2b-ebf1-4718-84df-3c5bcd3be953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.636 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5aff04-5e9b-465a-aa00-e590fa20cf8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.6547] device (tap48ce5fca-30): carrier: link connected
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.658 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b405e9-004d-43b3-85e6-cac21eef1e79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.669 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[844fa257-a68f-485d-bd61-c027dcbb01ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48ce5fca-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:a8:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 155322, 'reachable_time': 20529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 167729, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.671 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.671 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.683 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8aec32-cf46-4256-b191-775854531a1f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:a8ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 155322, 'tstamp': 155322}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 167730, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.699 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e183ae-6464-4171-8588-81fb6d42f456]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48ce5fca-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:a8:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 155322, 'reachable_time': 20529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 167731, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.729 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1a9ee2-7430-4288-8293-f839ada5b0ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.791 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[abb8a40d-00bf-4adb-b55c-50fddb53f42f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.792 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48ce5fca-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.793 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.793 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48ce5fca-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.7968] manager: (tap48ce5fca-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 09 09:57:49 compute-1 kernel: tap48ce5fca-30: entered promiscuous mode
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.803 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48ce5fca-30, col_values=(('external_ids', {'iface-id': 'b85a0af7-8e0c-4129-9420-36103d8f1eb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00059|binding|INFO|Releasing lport b85a0af7-8e0c-4129-9420-36103d8f1eb6 from this chassis (sb_readonly=0)
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.805 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.805 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[11593792-87e7-477e-8b57-cf982b81a5de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.806 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: global
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     log         /dev/log local0 debug
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     log-tag     haproxy-metadata-proxy-48ce5fca-3386-4b8a-82e2-88fc71a94881
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     user        root
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     group       root
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     maxconn     1024
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     pidfile     /var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     daemon
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: defaults
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     log global
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     mode http
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     option httplog
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     option dontlognull
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     option http-server-close
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     option forwardfor
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     retries                 3
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     timeout http-request    30s
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     timeout connect         30s
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     timeout client          32s
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     timeout server          32s
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     timeout http-keep-alive 30s
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: listen listener
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     bind 169.254.169.254:80
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:     http-request add-header X-OVN-Network-ID 48ce5fca-3386-4b8a-82e2-88fc71a94881
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.808 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'env', 'PROCESS_TAG=haproxy-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48ce5fca-3386-4b8a-82e2-88fc71a94881.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.853 2 DEBUG nova.compute.manager [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.853 2 DEBUG oslo_concurrency.lockutils [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.854 2 DEBUG oslo_concurrency.lockutils [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.854 2 DEBUG oslo_concurrency.lockutils [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.854 2 DEBUG nova.compute.manager [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Processing event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.914 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.916 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5054MB free_disk=59.967525482177734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.916 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.916 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:57:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:49.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.982 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance e811a931-a3de-4684-8b2f-e916788f6ea9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.983 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.983 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.011 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:50 compute-1 podman[167813]: 2025-10-09 09:57:50.149661685 +0000 UTC m=+0.038714668 container create 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 09 09:57:50 compute-1 systemd[1]: Started libpod-conmon-50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f.scope.
Oct 09 09:57:50 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:57:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596f16e2f74877f4af0737f9ce5c377193e245dbce014e179b5c36f8fe3efb0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 09:57:50 compute-1 podman[167813]: 2025-10-09 09:57:50.202863794 +0000 UTC m=+0.091916787 container init 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 09 09:57:50 compute-1 podman[167813]: 2025-10-09 09:57:50.207757152 +0000 UTC m=+0.096810134 container start 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:57:50 compute-1 podman[167813]: 2025-10-09 09:57:50.130936333 +0000 UTC m=+0.019989326 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:57:50 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [NOTICE]   (167838) : New worker (167840) forked
Oct 09 09:57:50 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [NOTICE]   (167838) : Loading success.
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.383 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.388 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.394 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003870.3939822, e811a931-a3de-4684-8b2f-e916788f6ea9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.394 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] VM Started (Lifecycle Event)
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.396 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.398 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.400 2 INFO nova.virt.libvirt.driver [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance spawned successfully.
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.400 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.413 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.419 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.422 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.422 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.422 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.423 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.423 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.423 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:57:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/938447497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:57:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3090346491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.427 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.431 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.431 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.459 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.459 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003870.394087, e811a931-a3de-4684-8b2f-e916788f6ea9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.460 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] VM Paused (Lifecycle Event)
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.474 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.476 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003870.3977108, e811a931-a3de-4684-8b2f-e916788f6ea9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.476 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] VM Resumed (Lifecycle Event)
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.483 2 INFO nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Took 11.56 seconds to spawn the instance on the hypervisor.
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.483 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.498 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.500 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.520 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.542 2 INFO nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Took 12.27 seconds to build instance.
Oct 09 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.556 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:50.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.431 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.431 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.432 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.432 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:57:51 compute-1 ceph-mon[9795]: pgmap v740: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:51 compute-1 sudo[167848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:57:51 compute-1 sudo[167848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:51 compute-1 sudo[167848]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.923 2 DEBUG nova.compute.manager [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.923 2 DEBUG oslo_concurrency.lockutils [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.924 2 DEBUG oslo_concurrency.lockutils [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.924 2 DEBUG oslo_concurrency.lockutils [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.924 2 DEBUG nova.compute.manager [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] No waiting events found dispatching network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.924 2 WARNING nova.compute.manager [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received unexpected event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b for instance with vm_state active and task_state None.
Oct 09 09:57:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:51.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:52 compute-1 nova_compute[162974]: 2025-10-09 09:57:52.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:52.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.401 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.402 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.403 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.403 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e811a931-a3de-4684-8b2f-e916788f6ea9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:57:53 compute-1 ceph-mon[9795]: pgmap v741: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 09 09:57:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3044383655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3402390666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:53 compute-1 ovn_controller[62080]: 2025-10-09T09:57:53Z|00060|binding|INFO|Releasing lport b85a0af7-8e0c-4129-9420-36103d8f1eb6 from this chassis (sb_readonly=0)
Oct 09 09:57:53 compute-1 NetworkManager[982]: <info>  [1760003873.6378] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 09 09:57:53 compute-1 NetworkManager[982]: <info>  [1760003873.6386] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:53 compute-1 ovn_controller[62080]: 2025-10-09T09:57:53Z|00061|binding|INFO|Releasing lport b85a0af7-8e0c-4129-9420-36103d8f1eb6 from this chassis (sb_readonly=0)
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:53.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:57:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3718288945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.425 2 DEBUG nova.compute.manager [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.425 2 DEBUG nova.compute.manager [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing instance network info cache due to event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.426 2 DEBUG oslo_concurrency.lockutils [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:57:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/872108722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3718288945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:54.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.972 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.987 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.988 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.988 2 DEBUG oslo_concurrency.lockutils [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.988 2 DEBUG nova.network.neutron [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.989 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.989 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.989 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:55 compute-1 ceph-mon[9795]: pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 09 09:57:55 compute-1 podman[167876]: 2025-10-09 09:57:55.540428542 +0000 UTC m=+0.048246464 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 09 09:57:55 compute-1 podman[167877]: 2025-10-09 09:57:55.54625549 +0000 UTC m=+0.051515218 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 09 09:57:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:55.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:55 compute-1 nova_compute[162974]: 2025-10-09 09:57:55.984 2 DEBUG nova.network.neutron [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updated VIF entry in instance network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:57:55 compute-1 nova_compute[162974]: 2025-10-09 09:57:55.984 2 DEBUG nova.network.neutron [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:57:55 compute-1 nova_compute[162974]: 2025-10-09 09:57:55.997 2 DEBUG oslo_concurrency.lockutils [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:57:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:56.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:57 compute-1 ceph-mon[9795]: pgmap v743: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 09 09:57:57 compute-1 nova_compute[162974]: 2025-10-09 09:57:57.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:57.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:58.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:58 compute-1 nova_compute[162974]: 2025-10-09 09:57:58.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:57:59 compute-1 ceph-mon[9795]: pgmap v744: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:57:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:57:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:59.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:00.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:01 compute-1 ovn_controller[62080]: 2025-10-09T09:58:01Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:48:22 10.100.0.3
Oct 09 09:58:01 compute-1 ovn_controller[62080]: 2025-10-09T09:58:01Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:48:22 10.100.0.3
Oct 09 09:58:01 compute-1 ceph-mon[9795]: pgmap v745: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:58:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:02 compute-1 nova_compute[162974]: 2025-10-09 09:58:02.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:02.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:03 compute-1 ceph-mon[9795]: pgmap v746: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 09 09:58:03 compute-1 nova_compute[162974]: 2025-10-09 09:58:03.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:03.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:04.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:05 compute-1 ceph-mon[9795]: pgmap v747: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 09 09:58:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:58:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:05.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:06 compute-1 nova_compute[162974]: 2025-10-09 09:58:06.780 2 INFO nova.compute.manager [None req-b9af8cfb-f0a5-41ac-beff-3ef3de796fb0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Get console output
Oct 09 09:58:06 compute-1 nova_compute[162974]: 2025-10-09 09:58:06.784 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 09 09:58:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:06.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:07 compute-1 ceph-mon[9795]: pgmap v748: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 09 09:58:07 compute-1 podman[167919]: 2025-10-09 09:58:07.557304627 +0000 UTC m=+0.062767366 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 09 09:58:07 compute-1 nova_compute[162974]: 2025-10-09 09:58:07.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:07.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.758 2 DEBUG nova.compute.manager [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.758 2 DEBUG nova.compute.manager [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing instance network info cache due to event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.758 2 DEBUG oslo_concurrency.lockutils [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.758 2 DEBUG oslo_concurrency.lockutils [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.759 2 DEBUG nova.network.neutron [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:58:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:08.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:08 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:08.859 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:58:08 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:08.860 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:09 compute-1 ceph-mon[9795]: pgmap v749: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 09:58:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:09.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:10.037 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:10 compute-1 sudo[167943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:58:10 compute-1 sudo[167943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:10 compute-1 sudo[167943]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:10 compute-1 sudo[167968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:58:10 compute-1 sudo[167968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:10.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:10 compute-1 sudo[167968]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:10 compute-1 nova_compute[162974]: 2025-10-09 09:58:10.946 2 DEBUG nova.network.neutron [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updated VIF entry in instance network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:58:10 compute-1 nova_compute[162974]: 2025-10-09 09:58:10.947 2 DEBUG nova.network.neutron [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:58:10 compute-1 nova_compute[162974]: 2025-10-09 09:58:10.962 2 DEBUG oslo_concurrency.lockutils [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:58:11 compute-1 ceph-mon[9795]: pgmap v750: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:58:11 compute-1 ceph-mon[9795]: pgmap v751: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:58:11 compute-1 sudo[168023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:58:11 compute-1 sudo[168023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:11 compute-1 sudo[168023]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:11.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2643942600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:58:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2643942600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:58:12 compute-1 nova_compute[162974]: 2025-10-09 09:58:12.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:12.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:13 compute-1 ceph-mon[9795]: pgmap v752: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 121 KiB/s wr, 26 op/s
Oct 09 09:58:13 compute-1 nova_compute[162974]: 2025-10-09 09:58:13.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:13.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:14.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:15 compute-1 sudo[168049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:58:15 compute-1 sudo[168049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:15 compute-1 sudo[168049]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:58:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:58:15 compute-1 ceph-mon[9795]: pgmap v753: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 121 KiB/s wr, 26 op/s
Oct 09 09:58:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2262883313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:15.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:16.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:17 compute-1 podman[168076]: 2025-10-09 09:58:17.535560672 +0000 UTC m=+0.043249079 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 09:58:17 compute-1 nova_compute[162974]: 2025-10-09 09:58:17.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:17 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:17.863 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:58:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:17.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:18 compute-1 ceph-mon[9795]: pgmap v754: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 17 KiB/s wr, 2 op/s
Oct 09 09:58:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:18.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:18 compute-1 nova_compute[162974]: 2025-10-09 09:58:18.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:20.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:20 compute-1 ceph-mon[9795]: pgmap v755: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 17 KiB/s wr, 2 op/s
Oct 09 09:58:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:58:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:20.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:22.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:22 compute-1 ceph-mon[9795]: pgmap v756: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Oct 09 09:58:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2004045298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:58:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3984953572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:58:22 compute-1 nova_compute[162974]: 2025-10-09 09:58:22.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:22.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:23 compute-1 nova_compute[162974]: 2025-10-09 09:58:23.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:24.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:24 compute-1 ceph-mon[9795]: pgmap v757: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:58:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:24.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:26.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:26 compute-1 ceph-mon[9795]: pgmap v758: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:58:26 compute-1 podman[168099]: 2025-10-09 09:58:26.534310018 +0000 UTC m=+0.036185023 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct 09 09:58:26 compute-1 podman[168098]: 2025-10-09 09:58:26.558354358 +0000 UTC m=+0.062207822 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 09:58:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:26.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:27 compute-1 nova_compute[162974]: 2025-10-09 09:58:27.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:28.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:28 compute-1 ceph-mon[9795]: pgmap v759: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:58:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:28.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:28 compute-1 nova_compute[162974]: 2025-10-09 09:58:28.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:30.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:30 compute-1 ceph-mon[9795]: pgmap v760: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 09:58:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:30.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:32.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:32 compute-1 sudo[168134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:58:32 compute-1 sudo[168134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:32 compute-1 sudo[168134]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:32 compute-1 ceph-mon[9795]: pgmap v761: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:58:32 compute-1 nova_compute[162974]: 2025-10-09 09:58:32.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:32.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:33 compute-1 nova_compute[162974]: 2025-10-09 09:58:33.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:34.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:34 compute-1 ceph-mon[9795]: pgmap v762: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:58:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:34.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:58:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:36.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:36 compute-1 ceph-mon[9795]: pgmap v763: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:58:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:36.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:37 compute-1 nova_compute[162974]: 2025-10-09 09:58:37.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:38.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:38 compute-1 ceph-mon[9795]: pgmap v764: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 09 09:58:38 compute-1 podman[168162]: 2025-10-09 09:58:38.587536016 +0000 UTC m=+0.085116119 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 09:58:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:38.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:38 compute-1 nova_compute[162974]: 2025-10-09 09:58:38.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:40.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:40 compute-1 ceph-mon[9795]: pgmap v765: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:58:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:40.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:42.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:42 compute-1 ceph-mon[9795]: pgmap v766: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:58:42 compute-1 nova_compute[162974]: 2025-10-09 09:58:42.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:42.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:43 compute-1 nova_compute[162974]: 2025-10-09 09:58:43.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:44.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:44 compute-1 ceph-mon[9795]: pgmap v767: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:58:44 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2037300683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.213 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.214 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.214 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.214 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.214 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.215 2 INFO nova.compute.manager [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Terminating instance
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.216 2 DEBUG nova.compute.manager [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 09 09:58:45 compute-1 kernel: tap5fdcca80-23 (unregistering): left promiscuous mode
Oct 09 09:58:45 compute-1 NetworkManager[982]: <info>  [1760003925.2549] device (tap5fdcca80-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:45 compute-1 ovn_controller[62080]: 2025-10-09T09:58:45Z|00062|binding|INFO|Releasing lport 5fdcca80-237d-4123-b2d6-a46f90186d0b from this chassis (sb_readonly=0)
Oct 09 09:58:45 compute-1 ovn_controller[62080]: 2025-10-09T09:58:45Z|00063|binding|INFO|Setting lport 5fdcca80-237d-4123-b2d6-a46f90186d0b down in Southbound
Oct 09 09:58:45 compute-1 ovn_controller[62080]: 2025-10-09T09:58:45Z|00064|binding|INFO|Removing iface tap5fdcca80-23 ovn-installed in OVS
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.275 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:48:22 10.100.0.3'], port_security=['fa:16:3e:00:48:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e811a931-a3de-4684-8b2f-e916788f6ea9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ab824ab-8ac2-4d9c-9d6e-9bbdb4458228', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6496ebe5-cfc3-4a35-b1e6-27021c277fad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=5fdcca80-237d-4123-b2d6-a46f90186d0b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.277 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 5fdcca80-237d-4123-b2d6-a46f90186d0b in datapath 48ce5fca-3386-4b8a-82e2-88fc71a94881 unbound from our chassis
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.279 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48ce5fca-3386-4b8a-82e2-88fc71a94881, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.281 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9e429a86-a00d-4575-80cd-3c6c393e4d6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.283 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 namespace which is not needed anymore
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:45 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 09 09:58:45 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 12.844s CPU time.
Oct 09 09:58:45 compute-1 systemd-machined[120683]: Machine qemu-3-instance-00000004 terminated.
Oct 09 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [NOTICE]   (167838) : haproxy version is 2.8.14-c23fe91
Oct 09 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [NOTICE]   (167838) : path to executable is /usr/sbin/haproxy
Oct 09 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [WARNING]  (167838) : Exiting Master process...
Oct 09 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [ALERT]    (167838) : Current worker (167840) exited with code 143 (Terminated)
Oct 09 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [WARNING]  (167838) : All workers exited. Exiting... (0)
Oct 09 09:58:45 compute-1 systemd[1]: libpod-50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f.scope: Deactivated successfully.
Oct 09 09:58:45 compute-1 podman[168210]: 2025-10-09 09:58:45.416311775 +0000 UTC m=+0.035098777 container died 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.447 2 INFO nova.virt.libvirt.driver [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance destroyed successfully.
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.448 2 DEBUG nova.objects.instance [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid e811a931-a3de-4684-8b2f-e916788f6ea9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:58:45 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f-userdata-shm.mount: Deactivated successfully.
Oct 09 09:58:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-596f16e2f74877f4af0737f9ce5c377193e245dbce014e179b5c36f8fe3efb0f-merged.mount: Deactivated successfully.
Oct 09 09:58:45 compute-1 podman[168210]: 2025-10-09 09:58:45.457296423 +0000 UTC m=+0.076083426 container cleanup 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.466 2 DEBUG nova.virt.libvirt.vif [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1899878609',display_name='tempest-TestNetworkBasicOps-server-1899878609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1899878609',id=4,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZm4C6LRAPtfAr5m77K3NqQxZMtrMltDZaOJjL5VWwqcCmgw5WghdaHagMLuObgYdNXZ08m9cLFMwpCyPUmMwXoTGjd15bkV3f92hF1qRvuScT4iCVTrgjr7uJ/wKpdPQ==',key_name='tempest-TestNetworkBasicOps-1948988860',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:57:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-u8otko1l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:57:50Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e811a931-a3de-4684-8b2f-e916788f6ea9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.466 2 DEBUG nova.network.os_vif_util [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.467 2 DEBUG nova.network.os_vif_util [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.467 2 DEBUG os_vif [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:45 compute-1 systemd[1]: libpod-conmon-50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f.scope: Deactivated successfully.
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.469 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fdcca80-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.477 2 INFO os_vif [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23')
Oct 09 09:58:45 compute-1 podman[168247]: 2025-10-09 09:58:45.521308034 +0000 UTC m=+0.033380638 container remove 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.527 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[720d0179-5367-4419-bc10-84db61c90fe8]: (4, ('Thu Oct  9 09:58:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 (50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f)\n50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f\nThu Oct  9 09:58:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 (50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f)\n50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.529 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[41999993-6852-4203-86e7-b77736f5d515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.530 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48ce5fca-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:45 compute-1 kernel: tap48ce5fca-30: left promiscuous mode
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.550 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[577a0fe5-1939-49da-b2ca-14e0d3a5a140]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.574 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9621c2-8272-4470-8542-caaed53d67f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.575 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[24ceb8ca-9a6f-49a3-afbc-c1007751a59b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.591 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[375f1e35-3fbb-4c61-8529-affd292f0a59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 155316, 'reachable_time': 32465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 168277, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:58:45 compute-1 systemd[1]: run-netns-ovnmeta\x2d48ce5fca\x2d3386\x2d4b8a\x2d82e2\x2d88fc71a94881.mount: Deactivated successfully.
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.596 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.596 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f9f56f-8bb5-4ee3-9be6-820b0e732077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.615 2 DEBUG nova.compute.manager [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-unplugged-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.615 2 DEBUG oslo_concurrency.lockutils [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.616 2 DEBUG oslo_concurrency.lockutils [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.616 2 DEBUG oslo_concurrency.lockutils [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.616 2 DEBUG nova.compute.manager [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] No waiting events found dispatching network-vif-unplugged-5fdcca80-237d-4123-b2d6-a46f90186d0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.617 2 DEBUG nova.compute.manager [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-unplugged-5fdcca80-237d-4123-b2d6-a46f90186d0b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.656 2 INFO nova.virt.libvirt.driver [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Deleting instance files /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9_del
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.657 2 INFO nova.virt.libvirt.driver [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Deletion of /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9_del complete
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.695 2 INFO nova.compute.manager [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Took 0.48 seconds to destroy the instance on the hypervisor.
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.695 2 DEBUG oslo.service.loopingcall [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.695 2 DEBUG nova.compute.manager [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 09 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.696 2 DEBUG nova.network.neutron [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 09 09:58:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:46 compute-1 ceph-mon[9795]: pgmap v768: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.369 2 DEBUG nova.network.neutron [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.378 2 INFO nova.compute.manager [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Took 0.68 seconds to deallocate network for instance.
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.409 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.409 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.447 2 DEBUG oslo_concurrency.processutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:58:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:58:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1068777579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.807 2 DEBUG oslo_concurrency.processutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.813 2 DEBUG nova.compute.provider_tree [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.830 2 DEBUG nova.scheduler.client.report [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.844 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.862 2 INFO nova.scheduler.client.report [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance e811a931-a3de-4684-8b2f-e916788f6ea9
Oct 09 09:58:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:58:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:46.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.906 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1068777579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.675 2 DEBUG nova.compute.manager [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.676 2 DEBUG oslo_concurrency.lockutils [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.676 2 DEBUG oslo_concurrency.lockutils [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.676 2 DEBUG oslo_concurrency.lockutils [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.676 2 DEBUG nova.compute.manager [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] No waiting events found dispatching network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.677 2 WARNING nova.compute.manager [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received unexpected event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b for instance with vm_state deleted and task_state None.
Oct 09 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.677 2 DEBUG nova.compute.manager [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-deleted-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:48.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:48 compute-1 ceph-mon[9795]: pgmap v769: 337 pgs: 337 active+clean; 48 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 09 09:58:48 compute-1 podman[168302]: 2025-10-09 09:58:48.543241346 +0000 UTC m=+0.051489469 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 09:58:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:58:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:48.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:58:49 compute-1 nova_compute[162974]: 2025-10-09 09:58:49.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:49 compute-1 nova_compute[162974]: 2025-10-09 09:58:49.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:50.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:50 compute-1 nova_compute[162974]: 2025-10-09 09:58:50.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:50 compute-1 ceph-mon[9795]: pgmap v770: 337 pgs: 337 active+clean; 48 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 09 09:58:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:58:50 compute-1 nova_compute[162974]: 2025-10-09 09:58:50.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.109 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.123 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.137 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.137 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:58:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:58:51 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/78441123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.492 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.741 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.742 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5028MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.742 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.743 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.786 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.786 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.798 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:58:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:52 compute-1 sudo[168366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:58:52 compute-1 sudo[168366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:52 compute-1 sudo[168366]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:58:52 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4092825429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.152 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.160 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.176 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.199 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.200 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:52 compute-1 ceph-mon[9795]: pgmap v771: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Oct 09 09:58:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/78441123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4092825429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:52.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:54.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.192 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.192 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.192 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.206 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.206 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.207 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:54 compute-1 ceph-mon[9795]: pgmap v772: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 09 09:58:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3219886149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/912181250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:54.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:55 compute-1 nova_compute[162974]: 2025-10-09 09:58:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:55 compute-1 nova_compute[162974]: 2025-10-09 09:58:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/474588185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:55 compute-1 nova_compute[162974]: 2025-10-09 09:58:55.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:56.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:56 compute-1 ceph-mon[9795]: pgmap v773: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 09 09:58:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3921144099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:57 compute-1 podman[168396]: 2025-10-09 09:58:57.52624737 +0000 UTC m=+0.037421204 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:58:57 compute-1 podman[168397]: 2025-10-09 09:58:57.53628571 +0000 UTC m=+0.044827182 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 09 09:58:57 compute-1 nova_compute[162974]: 2025-10-09 09:58:57.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:58:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:58:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:58:58 compute-1 ceph-mon[9795]: pgmap v774: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Oct 09 09:58:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:58:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:58.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:00.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:00 compute-1 ceph-mon[9795]: pgmap v775: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Oct 09 09:59:00 compute-1 nova_compute[162974]: 2025-10-09 09:59:00.446 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760003925.4452076, e811a931-a3de-4684-8b2f-e916788f6ea9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:59:00 compute-1 nova_compute[162974]: 2025-10-09 09:59:00.446 2 INFO nova.compute.manager [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] VM Stopped (Lifecycle Event)
Oct 09 09:59:00 compute-1 nova_compute[162974]: 2025-10-09 09:59:00.462 2 DEBUG nova.compute.manager [None req-f15cf48f-4f4c-4fd0-8b9a-edb2e6d9b290 - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:59:00 compute-1 nova_compute[162974]: 2025-10-09 09:59:00.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:02.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:02 compute-1 ceph-mon[9795]: pgmap v776: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
Oct 09 09:59:02 compute-1 nova_compute[162974]: 2025-10-09 09:59:02.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:59:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/552648784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:02.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/552648784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:04.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:04 compute-1 ceph-mon[9795]: pgmap v777: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:59:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:04.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:59:05 compute-1 nova_compute[162974]: 2025-10-09 09:59:05.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:06.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:06 compute-1 ceph-mon[9795]: pgmap v778: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:59:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:07 compute-1 nova_compute[162974]: 2025-10-09 09:59:07.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:08.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:08 compute-1 ceph-mon[9795]: pgmap v779: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 09 09:59:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:59:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:59:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2385607545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:09 compute-1 ceph-mon[9795]: pgmap v780: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 09 09:59:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2240393634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:09 compute-1 podman[168437]: 2025-10-09 09:59:09.54508863 +0000 UTC m=+0.054754584 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 09 09:59:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:10.037 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:10.037 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:10.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:10 compute-1 nova_compute[162974]: 2025-10-09 09:59:10.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:10.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:11.124 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:59:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:11.124 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:59:11 compute-1 nova_compute[162974]: 2025-10-09 09:59:11.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:12.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:12 compute-1 ceph-mon[9795]: pgmap v781: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 09 09:59:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/282215734' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:59:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/282215734' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:59:12 compute-1 sudo[168461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:59:12 compute-1 sudo[168461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:12 compute-1 sudo[168461]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:12 compute-1 nova_compute[162974]: 2025-10-09 09:59:12.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:12.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:14.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:14 compute-1 ceph-mon[9795]: pgmap v782: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 09 09:59:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:14.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:15.125 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:15 compute-1 sudo[168487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:59:15 compute-1 sudo[168487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:15 compute-1 sudo[168487]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:15 compute-1 sudo[168512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:59:15 compute-1 sudo[168512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:15 compute-1 nova_compute[162974]: 2025-10-09 09:59:15.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:15 compute-1 podman[168594]: 2025-10-09 09:59:15.601165136 +0000 UTC m=+0.038247692 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:59:15 compute-1 podman[168611]: 2025-10-09 09:59:15.733753111 +0000 UTC m=+0.046895039 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 09 09:59:15 compute-1 podman[168594]: 2025-10-09 09:59:15.737060555 +0000 UTC m=+0.174143110 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 09 09:59:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:16 compute-1 podman[168691]: 2025-10-09 09:59:16.025901209 +0000 UTC m=+0.034908697 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:59:16 compute-1 podman[168691]: 2025-10-09 09:59:16.035850472 +0000 UTC m=+0.044857939 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:59:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:16.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:16 compute-1 ceph-mon[9795]: pgmap v783: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 09 09:59:16 compute-1 podman[168801]: 2025-10-09 09:59:16.359202092 +0000 UTC m=+0.033990016 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:59:16 compute-1 podman[168801]: 2025-10-09 09:59:16.368905301 +0000 UTC m=+0.043693225 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 09:59:16 compute-1 podman[168854]: 2025-10-09 09:59:16.503759737 +0000 UTC m=+0.033578191 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, build-date=2023-02-22T09:23:20, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, version=2.2.4)
Oct 09 09:59:16 compute-1 podman[168854]: 2025-10-09 09:59:16.515850206 +0000 UTC m=+0.045668659 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.openshift.tags=Ceph keepalived, vcs-type=git, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vendor=Red Hat, Inc., version=2.2.4, description=keepalived for Ceph, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20)
Oct 09 09:59:16 compute-1 sudo[168512]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:16 compute-1 sudo[168881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:59:16 compute-1 sudo[168881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:16 compute-1 sudo[168881]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:16 compute-1 sudo[168906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:59:16 compute-1 sudo[168906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:16.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:17 compute-1 sudo[168906]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-1 ceph-mon[9795]: pgmap v784: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:59:17 compute-1 nova_compute[162974]: 2025-10-09 09:59:17.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:18.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:18 compute-1 ceph-mon[9795]: pgmap v785: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Oct 09 09:59:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:18.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:19 compute-1 podman[168962]: 2025-10-09 09:59:19.547761005 +0000 UTC m=+0.053926824 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 09:59:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:59:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:20.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:20 compute-1 nova_compute[162974]: 2025-10-09 09:59:20.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:20 compute-1 ceph-mon[9795]: pgmap v786: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Oct 09 09:59:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:20.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:21 compute-1 sudo[168979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:59:21 compute-1 sudo[168979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:21 compute-1 sudo[168979]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:22.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:22 compute-1 nova_compute[162974]: 2025-10-09 09:59:22.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:22.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:23 compute-1 ceph-mon[9795]: pgmap v787: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 80 op/s
Oct 09 09:59:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:24.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:24.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:25 compute-1 ceph-mon[9795]: pgmap v788: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 80 op/s
Oct 09 09:59:25 compute-1 nova_compute[162974]: 2025-10-09 09:59:25.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:26.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:26 compute-1 ovn_controller[62080]: 2025-10-09T09:59:26Z|00065|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 09 09:59:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:26.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:27 compute-1 ceph-mon[9795]: pgmap v789: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 80 op/s
Oct 09 09:59:27 compute-1 nova_compute[162974]: 2025-10-09 09:59:27.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:28.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:28 compute-1 podman[169008]: 2025-10-09 09:59:28.539737377 +0000 UTC m=+0.046787939 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:59:28 compute-1 podman[169009]: 2025-10-09 09:59:28.544207172 +0000 UTC m=+0.050466572 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:59:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:59:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:28.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:59:29 compute-1 ceph-mon[9795]: pgmap v790: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 316 KiB/s rd, 2.5 MiB/s wr, 71 op/s
Oct 09 09:59:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:30.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:30 compute-1 nova_compute[162974]: 2025-10-09 09:59:30.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:30.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:31 compute-1 ceph-mon[9795]: pgmap v791: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:59:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:32.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:32 compute-1 sudo[169043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:59:32 compute-1 sudo[169043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:32 compute-1 sudo[169043]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:32 compute-1 nova_compute[162974]: 2025-10-09 09:59:32.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:33 compute-1 ceph-mon[9795]: pgmap v792: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:59:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:34.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:34.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:35 compute-1 ceph-mon[9795]: pgmap v793: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:59:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:59:35 compute-1 nova_compute[162974]: 2025-10-09 09:59:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:36.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:59:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:36.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:59:37 compute-1 ceph-mon[9795]: pgmap v794: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:59:37 compute-1 nova_compute[162974]: 2025-10-09 09:59:37.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:38.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:38.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:39 compute-1 ceph-mon[9795]: pgmap v795: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:59:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:40.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:40 compute-1 nova_compute[162974]: 2025-10-09 09:59:40.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:40 compute-1 podman[169072]: 2025-10-09 09:59:40.540500709 +0000 UTC m=+0.052463356 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 09:59:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:40.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:41 compute-1 ceph-mon[9795]: pgmap v796: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 09 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.741 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.741 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.751 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 09 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.803 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.804 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.808 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 09 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.808 2 INFO nova.compute.claims [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Claim successful on node compute-1.ctlplane.example.com
Oct 09 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.870 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:42.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:42 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:59:42 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2641533928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.205 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.208 2 DEBUG nova.compute.provider_tree [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.219 2 DEBUG nova.scheduler.client.report [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.232 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.232 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.268 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.269 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.287 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 09:59:42 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2641533928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.297 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.361 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.362 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.362 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Creating image(s)
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.379 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.399 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.415 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.418 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.464 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.464 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.465 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.465 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.483 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.485 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.637 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.682 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.743 2 DEBUG nova.objects.instance [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid c7e917a6-1f6f-4739-a31a-bdcfa52bf93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.755 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.755 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Ensure instance console log exists: /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.755 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.756 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.756 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.920 2 DEBUG nova.policy [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 09:59:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:42.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:43 compute-1 ceph-mon[9795]: pgmap v797: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct 09 09:59:44 compute-1 nova_compute[162974]: 2025-10-09 09:59:44.115 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Successfully created port: 1687cc87-5c7d-4d91-9386-d985ccc5f55f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 09 09:59:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:44.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:59:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:44.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.030 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Successfully updated port: 1687cc87-5c7d-4d91-9386-d985ccc5f55f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.041 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.041 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.041 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.079 2 DEBUG nova.compute.manager [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-changed-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.079 2 DEBUG nova.compute.manager [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Refreshing instance network info cache due to event network-changed-1687cc87-5c7d-4d91-9386-d985ccc5f55f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.079 2 DEBUG oslo_concurrency.lockutils [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.132 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 09:59:45 compute-1 ceph-mon[9795]: pgmap v798: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Oct 09 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:46.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.182 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updating instance_info_cache with network_info: [{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.195 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.195 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance network_info: |[{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.195 2 DEBUG oslo_concurrency.lockutils [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.195 2 DEBUG nova.network.neutron [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Refreshing network info cache for port 1687cc87-5c7d-4d91-9386-d985ccc5f55f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.197 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start _get_guest_xml network_info=[{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.200 2 WARNING nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.203 2 DEBUG nova.virt.libvirt.host [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.204 2 DEBUG nova.virt.libvirt.host [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.208 2 DEBUG nova.virt.libvirt.host [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.208 2 DEBUG nova.virt.libvirt.host [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.208 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.208 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.209 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.209 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.209 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.209 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.211 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.212 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.380475) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986380494, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1556, "num_deletes": 250, "total_data_size": 3944363, "memory_usage": 3993752, "flush_reason": "Manual Compaction"}
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986386338, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1603308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23077, "largest_seqno": 24628, "table_properties": {"data_size": 1598171, "index_size": 2405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13335, "raw_average_key_size": 20, "raw_value_size": 1586997, "raw_average_value_size": 2456, "num_data_blocks": 104, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003856, "oldest_key_time": 1760003856, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 5890 microseconds, and 3645 cpu microseconds.
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386364) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1603308 bytes OK
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386374) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386857) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386867) EVENT_LOG_v1 {"time_micros": 1760003986386864, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386881) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3937097, prev total WAL file size 3937097, number of live WAL files 2.
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.387492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1565KB)], [42(13MB)]
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986387514, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16204471, "oldest_snapshot_seqno": -1}
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5590 keys, 13065615 bytes, temperature: kUnknown
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986422943, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 13065615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13028786, "index_size": 21743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 140488, "raw_average_key_size": 25, "raw_value_size": 12927883, "raw_average_value_size": 2312, "num_data_blocks": 891, "num_entries": 5590, "num_filter_entries": 5590, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.423092) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 13065615 bytes
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.423574) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 456.7 rd, 368.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(18.3) write-amplify(8.1) OK, records in: 6048, records dropped: 458 output_compression: NoCompression
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.423590) EVENT_LOG_v1 {"time_micros": 1760003986423583, "job": 24, "event": "compaction_finished", "compaction_time_micros": 35478, "compaction_time_cpu_micros": 19544, "output_level": 6, "num_output_files": 1, "total_output_size": 13065615, "num_input_records": 6048, "num_output_records": 5590, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986423849, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986425305, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.387448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:59:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3422652302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.572 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.591 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.593 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:59:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2796860595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.932 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.934 2 DEBUG nova.virt.libvirt.vif [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:59:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1252166476',display_name='tempest-TestNetworkBasicOps-server-1252166476',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1252166476',id=7,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbgn6SPIFM6AGarUubqFoimfuOdsNeRWX5sq4kHFgr7hG7is5Q/Q8Ek3R1Q0esxFqFL7X0+gBaYCim0P8OY9cMbX9okJGNQoFkk0zy9ycrfeQthKDNu+tA50E3TW/m2Ww==',key_name='tempest-TestNetworkBasicOps-1380098384',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-a0fec6m8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:59:42Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7e917a6-1f6f-4739-a31a-bdcfa52bf93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.934 2 DEBUG nova.network.os_vif_util [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.935 2 DEBUG nova.network.os_vif_util [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.936 2 DEBUG nova.objects.instance [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid c7e917a6-1f6f-4739-a31a-bdcfa52bf93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:59:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:46.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.947 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] End _get_guest_xml xml=<domain type="kvm">
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <uuid>c7e917a6-1f6f-4739-a31a-bdcfa52bf93b</uuid>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <name>instance-00000007</name>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <memory>131072</memory>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <vcpu>1</vcpu>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <metadata>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <nova:name>tempest-TestNetworkBasicOps-server-1252166476</nova:name>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <nova:creationTime>2025-10-09 09:59:46</nova:creationTime>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <nova:flavor name="m1.nano">
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <nova:memory>128</nova:memory>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <nova:disk>1</nova:disk>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <nova:swap>0</nova:swap>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <nova:vcpus>1</nova:vcpus>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       </nova:flavor>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <nova:owner>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       </nova:owner>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <nova:ports>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <nova:port uuid="1687cc87-5c7d-4d91-9386-d985ccc5f55f">
Oct 09 09:59:46 compute-1 nova_compute[162974]:           <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         </nova:port>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       </nova:ports>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     </nova:instance>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   </metadata>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <sysinfo type="smbios">
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <system>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <entry name="manufacturer">RDO</entry>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <entry name="product">OpenStack Compute</entry>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <entry name="serial">c7e917a6-1f6f-4739-a31a-bdcfa52bf93b</entry>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <entry name="uuid">c7e917a6-1f6f-4739-a31a-bdcfa52bf93b</entry>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <entry name="family">Virtual Machine</entry>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     </system>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   </sysinfo>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <os>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <boot dev="hd"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <smbios mode="sysinfo"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   </os>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <features>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <acpi/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <apic/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <vmcoreinfo/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   </features>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <clock offset="utc">
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <timer name="hpet" present="no"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   </clock>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <cpu mode="host-model" match="exact">
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   </cpu>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   <devices>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <disk type="network" device="disk">
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk">
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       </source>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <target dev="vda" bus="virtio"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <disk type="network" device="cdrom">
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config">
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       </source>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 09:59:46 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       </auth>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <target dev="sda" bus="sata"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     </disk>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <interface type="ethernet">
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <mac address="fa:16:3e:18:30:66"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <mtu size="1442"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <target dev="tap1687cc87-5c"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     </interface>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <serial type="pty">
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <log file="/var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/console.log" append="off"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     </serial>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <video>
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     </video>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <input type="tablet" bus="usb"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <rng model="virtio">
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <backend model="random">/dev/urandom</backend>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     </rng>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <controller type="usb" index="0"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     <memballoon model="virtio">
Oct 09 09:59:46 compute-1 nova_compute[162974]:       <stats period="10"/>
Oct 09 09:59:46 compute-1 nova_compute[162974]:     </memballoon>
Oct 09 09:59:46 compute-1 nova_compute[162974]:   </devices>
Oct 09 09:59:46 compute-1 nova_compute[162974]: </domain>
Oct 09 09:59:46 compute-1 nova_compute[162974]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.949 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Preparing to wait for external event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.949 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.950 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.950 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.950 2 DEBUG nova.virt.libvirt.vif [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:59:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1252166476',display_name='tempest-TestNetworkBasicOps-server-1252166476',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1252166476',id=7,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbgn6SPIFM6AGarUubqFoimfuOdsNeRWX5sq4kHFgr7hG7is5Q/Q8Ek3R1Q0esxFqFL7X0+gBaYCim0P8OY9cMbX9okJGNQoFkk0zy9ycrfeQthKDNu+tA50E3TW/m2Ww==',key_name='tempest-TestNetworkBasicOps-1380098384',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-a0fec6m8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:59:42Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7e917a6-1f6f-4739-a31a-bdcfa52bf93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.951 2 DEBUG nova.network.os_vif_util [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.951 2 DEBUG nova.network.os_vif_util [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.951 2 DEBUG os_vif [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.952 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1687cc87-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1687cc87-5c, col_values=(('external_ids', {'iface-id': '1687cc87-5c7d-4d91-9386-d985ccc5f55f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:30:66', 'vm-uuid': 'c7e917a6-1f6f-4739-a31a-bdcfa52bf93b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:46 compute-1 NetworkManager[982]: <info>  [1760003986.9573] manager: (tap1687cc87-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.962 2 INFO os_vif [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c')
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.989 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.989 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.989 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:18:30:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.990 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Using config drive
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.007 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.225 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Creating config drive at /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.229 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpieyfwj9p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:47 compute-1 ceph-mon[9795]: pgmap v799: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Oct 09 09:59:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3422652302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2796860595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.348 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpieyfwj9p" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.368 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.370 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.450 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.451 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Deleting local config drive /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config because it was imported into RBD.
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.480 2 DEBUG nova.network.neutron [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updated VIF entry in instance network info cache for port 1687cc87-5c7d-4d91-9386-d985ccc5f55f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.480 2 DEBUG nova.network.neutron [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updating instance_info_cache with network_info: [{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:59:47 compute-1 kernel: tap1687cc87-5c: entered promiscuous mode
Oct 09 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.4855] manager: (tap1687cc87-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00066|binding|INFO|Claiming lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f for this chassis.
Oct 09 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00067|binding|INFO|1687cc87-5c7d-4d91-9386-d985ccc5f55f: Claiming fa:16:3e:18:30:66 10.100.0.22
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.492 2 DEBUG oslo_concurrency.lockutils [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.495 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:30:66 10.100.0.22'], port_security=['fa:16:3e:18:30:66 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'c7e917a6-1f6f-4739-a31a-bdcfa52bf93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '405a5985-622d-4a01-bebe-dd3a8833c5f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a09146a-9f3c-432d-a7ac-1e34c91ed6bf, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=1687cc87-5c7d-4d91-9386-d985ccc5f55f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.496 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 1687cc87-5c7d-4d91-9386-d985ccc5f55f in datapath 4f792301-cf2d-455d-8ad6-8a55cc3146e9 bound to our chassis
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.497 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.505 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[ff46ab3c-1667-46fb-9594-45291a3f7aec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.506 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f792301-c1 in ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.507 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f792301-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.507 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[ee856aa3-12df-444c-b209-473bc7426bd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.508 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d251430a-8962-4f32-80df-ed4ef5e7bfa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 systemd-udevd[169423]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:59:47 compute-1 systemd-machined[120683]: New machine qemu-4-instance-00000007.
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.517 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[03497ea6-4399-4957-af71-05f8e9ab2733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.5243] device (tap1687cc87-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.5249] device (tap1687cc87-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 09:59:47 compute-1 systemd[1]: Started Virtual Machine qemu-4-instance-00000007.
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.533 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6d281c-ad7f-44df-8211-c9a1b5f900b1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00068|binding|INFO|Setting lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f ovn-installed in OVS
Oct 09 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00069|binding|INFO|Setting lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f up in Southbound
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.556 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[fc497323-cb1e-483a-b72e-8720c5acb76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 systemd-udevd[169426]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.5603] manager: (tap4f792301-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.561 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[22fd40c4-eb3c-4e44-a627-53b755554b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.583 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[9dcd78e6-ab1c-4915-9b3c-75bd1e4b737a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.585 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[491f4249-f741-4872-8ed3-ce887b90a60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.6007] device (tap4f792301-c0): carrier: link connected
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.604 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[cab0e93a-342a-4b35-9a74-6e4e45166a53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.616 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a79aa631-31d6-486e-ad7b-527ecd5cba74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f792301-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 167116, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 169446, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.627 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[35907a25-39ac-4f05-b0e7-dbdf41f819e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:7e66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 167116, 'tstamp': 167116}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 169447, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.639 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[84b4e740-0408-4a58-a4fb-ecb7a29bf6b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f792301-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 167116, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 169448, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.660 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[66bc3cf1-0e5d-4b22-8f08-dd3b0e9e7bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.699 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[38334e7f-2c8e-4623-8504-8b9da05e19cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.700 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f792301-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.700 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.700 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f792301-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.7029] manager: (tap4f792301-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 09 09:59:47 compute-1 kernel: tap4f792301-c0: entered promiscuous mode
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.706 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f792301-c0, col_values=(('external_ids', {'iface-id': '704a96af-9e0f-4b61-9b53-029cbdc713e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00070|binding|INFO|Releasing lport 704a96af-9e0f-4b61-9b53-029cbdc713e8 from this chassis (sb_readonly=0)
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.709 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.710 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[167bb61f-16b1-4721-95e1-4da06ccfbb5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.710 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: global
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     log         /dev/log local0 debug
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     log-tag     haproxy-metadata-proxy-4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     user        root
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     group       root
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     maxconn     1024
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     pidfile     /var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     daemon
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: defaults
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     log global
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     mode http
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     option httplog
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     option dontlognull
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     option http-server-close
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     option forwardfor
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     retries                 3
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     timeout http-request    30s
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     timeout connect         30s
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     timeout client          32s
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     timeout server          32s
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     timeout http-keep-alive 30s
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: listen listener
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     bind 169.254.169.254:80
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:     http-request add-header X-OVN-Network-ID 4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.710 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'env', 'PROCESS_TAG=haproxy-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f792301-cf2d-455d-8ad6-8a55cc3146e9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.894 2 DEBUG nova.compute.manager [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.895 2 DEBUG oslo_concurrency.lockutils [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.895 2 DEBUG oslo_concurrency.lockutils [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.895 2 DEBUG oslo_concurrency.lockutils [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.896 2 DEBUG nova.compute.manager [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Processing event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 09 09:59:48 compute-1 podman[169518]: 2025-10-09 09:59:48.070421041 +0000 UTC m=+0.046297464 container create 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct 09 09:59:48 compute-1 systemd[1]: Started libpod-conmon-1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5.scope.
Oct 09 09:59:48 compute-1 systemd[1]: Started libcrun container.
Oct 09 09:59:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f1b4380ec721ede1d5a9d03dece1813f418232c10bc6615e335bd489013e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 09:59:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:48.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:48 compute-1 podman[169518]: 2025-10-09 09:59:48.134385323 +0000 UTC m=+0.110261765 container init 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 09 09:59:48 compute-1 podman[169518]: 2025-10-09 09:59:48.139778458 +0000 UTC m=+0.115654880 container start 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 09:59:48 compute-1 podman[169518]: 2025-10-09 09:59:48.052639727 +0000 UTC m=+0.028516159 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:59:48 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [NOTICE]   (169534) : New worker (169536) forked
Oct 09 09:59:48 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [NOTICE]   (169534) : Loading success.
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.220 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.223 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003988.2228212, c7e917a6-1f6f-4739-a31a-bdcfa52bf93b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.223 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] VM Started (Lifecycle Event)
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.225 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.228 2 INFO nova.virt.libvirt.driver [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance spawned successfully.
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.228 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.242 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.245 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.249 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.249 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.249 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.250 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.250 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.251 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.268 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.268 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003988.2232885, c7e917a6-1f6f-4739-a31a-bdcfa52bf93b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.268 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] VM Paused (Lifecycle Event)
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.286 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.289 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003988.2233796, c7e917a6-1f6f-4739-a31a-bdcfa52bf93b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.289 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] VM Resumed (Lifecycle Event)
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.295 2 INFO nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Took 5.93 seconds to spawn the instance on the hypervisor.
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.296 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.303 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.305 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.323 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.340 2 INFO nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Took 6.56 seconds to build instance.
Oct 09 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.349 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:49 compute-1 ceph-mon[9795]: pgmap v800: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 09 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.952 2 DEBUG nova.compute.manager [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.952 2 DEBUG oslo_concurrency.lockutils [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.952 2 DEBUG oslo_concurrency.lockutils [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.952 2 DEBUG oslo_concurrency.lockutils [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.953 2 DEBUG nova.compute.manager [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] No waiting events found dispatching network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.953 2 WARNING nova.compute.manager [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received unexpected event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f for instance with vm_state active and task_state None.
Oct 09 09:59:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:50.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:59:50 compute-1 podman[169542]: 2025-10-09 09:59:50.553010745 +0000 UTC m=+0.061697650 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 09 09:59:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:51 compute-1 nova_compute[162974]: 2025-10-09 09:59:51.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:51 compute-1 ceph-mon[9795]: pgmap v801: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:59:51 compute-1 nova_compute[162974]: 2025-10-09 09:59:51.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:52.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:52 compute-1 sudo[169580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:59:52 compute-1 sudo[169580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:52 compute-1 sudo[169580]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:52 compute-1 ceph-mon[9795]: pgmap v802: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 09 09:59:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:59:52 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1780233837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.498 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.548 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.548 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.770 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.770 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4873MB free_disk=59.92177200317383GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.771 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.771 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.828 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance c7e917a6-1f6f-4739-a31a-bdcfa52bf93b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.828 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.829 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.852 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:59:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:52.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:59:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:59:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3096639622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.228 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.232 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.243 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.264 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.264 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1780233837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3096639622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:54.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:54 compute-1 ceph-mon[9795]: pgmap v803: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:59:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:59:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.265 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.265 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.265 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:59:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2606411561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3859217005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.758 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.758 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.758 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 09 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.758 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c7e917a6-1f6f-4739-a31a-bdcfa52bf93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:59:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:56.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:56 compute-1 ceph-mon[9795]: pgmap v804: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:59:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1814886035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/136677374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2956790210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:56.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:56 compute-1 nova_compute[162974]: 2025-10-09 09:59:56.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.096 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updating instance_info_cache with network_info: [{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.110 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.110 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 09 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.111 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.111 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.111 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:58.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:58 compute-1 ceph-mon[9795]: pgmap v805: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 09 09:59:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 09:59:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:58.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:59 compute-1 podman[169635]: 2025-10-09 09:59:59.551255341 +0000 UTC m=+0.053905053 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 09 09:59:59 compute-1 podman[169634]: 2025-10-09 09:59:59.573302065 +0000 UTC m=+0.075911642 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 10:00:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:00.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:00 compute-1 ovn_controller[62080]: 2025-10-09T10:00:00Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:30:66 10.100.0.22
Oct 09 10:00:00 compute-1 ovn_controller[62080]: 2025-10-09T10:00:00Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:30:66 10.100.0.22
Oct 09 10:00:00 compute-1 ceph-mon[9795]: pgmap v806: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 75 op/s
Oct 09 10:00:00 compute-1 ceph-mon[9795]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct 09 10:00:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:00.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:01 compute-1 nova_compute[162974]: 2025-10-09 10:00:01.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:02.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:02 compute-1 ceph-mon[9795]: pgmap v807: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 09 10:00:02 compute-1 nova_compute[162974]: 2025-10-09 10:00:02.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:02.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:04.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:04 compute-1 ceph-mon[9795]: pgmap v808: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:00:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:04.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:00:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:06.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:06 compute-1 ceph-mon[9795]: pgmap v809: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:00:06 compute-1 nova_compute[162974]: 2025-10-09 10:00:06.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:06.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:07 compute-1 nova_compute[162974]: 2025-10-09 10:00:07.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:08.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:08 compute-1 ceph-mon[9795]: pgmap v810: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 09 10:00:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:08.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:10.038 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:10.039 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:10.039 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:00:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:10.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:00:10 compute-1 ceph-mon[9795]: pgmap v811: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:00:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:11 compute-1 systemd[1]: Starting system activity accounting tool...
Oct 09 10:00:11 compute-1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 10:00:11 compute-1 systemd[1]: Finished system activity accounting tool.
Oct 09 10:00:11 compute-1 podman[169675]: 2025-10-09 10:00:11.578511691 +0000 UTC m=+0.070135163 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 09 10:00:11 compute-1 nova_compute[162974]: 2025-10-09 10:00:11.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:12.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:12 compute-1 sudo[169699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:00:12 compute-1 sudo[169699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:12 compute-1 sudo[169699]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:12 compute-1 ceph-mon[9795]: pgmap v812: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 09 10:00:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2285839819' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:00:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2285839819' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:00:12 compute-1 nova_compute[162974]: 2025-10-09 10:00:12.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:12.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:14.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:14 compute-1 ceph-mon[9795]: pgmap v813: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 21 KiB/s wr, 3 op/s
Oct 09 10:00:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:14.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:16.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:16 compute-1 ceph-mon[9795]: pgmap v814: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 21 KiB/s wr, 3 op/s
Oct 09 10:00:16 compute-1 nova_compute[162974]: 2025-10-09 10:00:16.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:16.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:17 compute-1 nova_compute[162974]: 2025-10-09 10:00:17.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:00:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:18.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:00:18 compute-1 ceph-mon[9795]: pgmap v815: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 27 KiB/s wr, 4 op/s
Oct 09 10:00:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:18.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:20.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:20 compute-1 ceph-mon[9795]: pgmap v816: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 2 op/s
Oct 09 10:00:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:00:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.394548) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021394609, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 589, "num_deletes": 257, "total_data_size": 926082, "memory_usage": 939096, "flush_reason": "Manual Compaction"}
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021398488, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 609209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24633, "largest_seqno": 25217, "table_properties": {"data_size": 606288, "index_size": 893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6684, "raw_average_key_size": 17, "raw_value_size": 600364, "raw_average_value_size": 1592, "num_data_blocks": 41, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003987, "oldest_key_time": 1760003987, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3955 microseconds, and 3238 cpu microseconds.
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398516) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 609209 bytes OK
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398529) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398920) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398934) EVENT_LOG_v1 {"time_micros": 1760004021398930, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 922698, prev total WAL file size 922698, number of live WAL files 2.
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.399271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(594KB)], [45(12MB)]
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021399301, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13674824, "oldest_snapshot_seqno": -1}
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5445 keys, 13531452 bytes, temperature: kUnknown
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021436576, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13531452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13494739, "index_size": 22011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 138681, "raw_average_key_size": 25, "raw_value_size": 13395532, "raw_average_value_size": 2460, "num_data_blocks": 899, "num_entries": 5445, "num_filter_entries": 5445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437023) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13531452 bytes
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 364.2 rd, 360.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.5 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(44.7) write-amplify(22.2) OK, records in: 5967, records dropped: 522 output_compression: NoCompression
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437585) EVENT_LOG_v1 {"time_micros": 1760004021437577, "job": 26, "event": "compaction_finished", "compaction_time_micros": 37550, "compaction_time_cpu_micros": 23481, "output_level": 6, "num_output_files": 1, "total_output_size": 13531452, "num_input_records": 5967, "num_output_records": 5445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021438149, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021440502, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.399233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-1 sudo[169729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:00:21 compute-1 sudo[169729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:21 compute-1 sudo[169729]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:21 compute-1 sudo[169760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:00:21 compute-1 podman[169753]: 2025-10-09 10:00:21.527202854 +0000 UTC m=+0.050733695 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:00:21 compute-1 sudo[169760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:21 compute-1 nova_compute[162974]: 2025-10-09 10:00:21.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:22 compute-1 sudo[169760]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:22.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:22 compute-1 ceph-mon[9795]: pgmap v817: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 18 KiB/s wr, 3 op/s
Oct 09 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:00:22 compute-1 ceph-mon[9795]: pgmap v818: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 9.8 KiB/s wr, 2 op/s
Oct 09 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:00:22 compute-1 nova_compute[162974]: 2025-10-09 10:00:22.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:24.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:24.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:25 compute-1 ceph-mon[9795]: pgmap v819: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 9.8 KiB/s wr, 2 op/s
Oct 09 10:00:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:25 compute-1 sudo[169827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:00:25 compute-1 sudo[169827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:25 compute-1 sudo[169827]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000018s ======
Oct 09 10:00:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:26.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000018s
Oct 09 10:00:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:00:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:00:26 compute-1 ceph-mon[9795]: pgmap v820: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 11 KiB/s wr, 2 op/s
Oct 09 10:00:26 compute-1 nova_compute[162974]: 2025-10-09 10:00:26.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:00:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:26.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:00:27 compute-1 nova_compute[162974]: 2025-10-09 10:00:27.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:28.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:28.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:29 compute-1 ceph-mon[9795]: pgmap v821: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.5 KiB/s wr, 1 op/s
Oct 09 10:00:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:30 compute-1 podman[169854]: 2025-10-09 10:00:30.540438252 +0000 UTC m=+0.042933747 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:00:30 compute-1 podman[169855]: 2025-10-09 10:00:30.548070455 +0000 UTC m=+0.048955996 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:00:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:31.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:31 compute-1 ceph-mon[9795]: pgmap v822: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.5 KiB/s wr, 1 op/s
Oct 09 10:00:31 compute-1 nova_compute[162974]: 2025-10-09 10:00:31.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:00:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:32.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:00:32 compute-1 sudo[169888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:00:32 compute-1 sudo[169888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:32 compute-1 sudo[169888]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:32 compute-1 nova_compute[162974]: 2025-10-09 10:00:32.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:33.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:33 compute-1 ceph-mon[9795]: pgmap v823: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.9 KiB/s wr, 2 op/s
Oct 09 10:00:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:34.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:35.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:35 compute-1 ceph-mon[9795]: pgmap v824: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 09 10:00:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:00:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:36.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:36 compute-1 nova_compute[162974]: 2025-10-09 10:00:36.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:37.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:37 compute-1 ceph-mon[9795]: pgmap v825: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Oct 09 10:00:37 compute-1 nova_compute[162974]: 2025-10-09 10:00:37.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:38.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:38 compute-1 ovn_controller[62080]: 2025-10-09T10:00:38Z|00071|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Oct 09 10:00:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:39.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:39 compute-1 ceph-mon[9795]: pgmap v826: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 5.3 KiB/s wr, 1 op/s
Oct 09 10:00:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:40.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:41.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:41 compute-1 ceph-mon[9795]: pgmap v827: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 5.3 KiB/s wr, 1 op/s
Oct 09 10:00:41 compute-1 nova_compute[162974]: 2025-10-09 10:00:41.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:42.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:42 compute-1 podman[169918]: 2025-10-09 10:00:42.570881256 +0000 UTC m=+0.070888647 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct 09 10:00:42 compute-1 nova_compute[162974]: 2025-10-09 10:00:42.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:00:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:43.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:00:43 compute-1 ceph-mon[9795]: pgmap v828: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Oct 09 10:00:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:45.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:45 compute-1 ceph-mon[9795]: pgmap v829: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
Oct 09 10:00:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:00:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:46.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.628 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.628 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.628 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.628 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.629 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.630 2 INFO nova.compute.manager [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Terminating instance
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.631 2 DEBUG nova.compute.manager [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 09 10:00:46 compute-1 kernel: tap1687cc87-5c (unregistering): left promiscuous mode
Oct 09 10:00:46 compute-1 NetworkManager[982]: <info>  [1760004046.6733] device (tap1687cc87-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:00:46 compute-1 ovn_controller[62080]: 2025-10-09T10:00:46Z|00072|binding|INFO|Releasing lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f from this chassis (sb_readonly=0)
Oct 09 10:00:46 compute-1 ovn_controller[62080]: 2025-10-09T10:00:46Z|00073|binding|INFO|Setting lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f down in Southbound
Oct 09 10:00:46 compute-1 ovn_controller[62080]: 2025-10-09T10:00:46Z|00074|binding|INFO|Removing iface tap1687cc87-5c ovn-installed in OVS
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.691 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:30:66 10.100.0.22'], port_security=['fa:16:3e:18:30:66 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'c7e917a6-1f6f-4739-a31a-bdcfa52bf93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '405a5985-622d-4a01-bebe-dd3a8833c5f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a09146a-9f3c-432d-a7ac-1e34c91ed6bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=1687cc87-5c7d-4d91-9386-d985ccc5f55f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.692 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 1687cc87-5c7d-4d91-9386-d985ccc5f55f in datapath 4f792301-cf2d-455d-8ad6-8a55cc3146e9 unbound from our chassis
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.693 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f792301-cf2d-455d-8ad6-8a55cc3146e9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.693 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[83093879-7618-486a-820d-c52da400738e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.694 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 namespace which is not needed anymore
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:46 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 09 10:00:46 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Consumed 12.306s CPU time.
Oct 09 10:00:46 compute-1 systemd-machined[120683]: Machine qemu-4-instance-00000007 terminated.
Oct 09 10:00:46 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [NOTICE]   (169534) : haproxy version is 2.8.14-c23fe91
Oct 09 10:00:46 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [NOTICE]   (169534) : path to executable is /usr/sbin/haproxy
Oct 09 10:00:46 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [ALERT]    (169534) : Current worker (169536) exited with code 143 (Terminated)
Oct 09 10:00:46 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [WARNING]  (169534) : All workers exited. Exiting... (0)
Oct 09 10:00:46 compute-1 systemd[1]: libpod-1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5.scope: Deactivated successfully.
Oct 09 10:00:46 compute-1 podman[169964]: 2025-10-09 10:00:46.805963231 +0000 UTC m=+0.039075898 container died 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 10:00:46 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5-userdata-shm.mount: Deactivated successfully.
Oct 09 10:00:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-002f1b4380ec721ede1d5a9d03dece1813f418232c10bc6615e335bd489013e6-merged.mount: Deactivated successfully.
Oct 09 10:00:46 compute-1 podman[169964]: 2025-10-09 10:00:46.833512396 +0000 UTC m=+0.066625062 container cleanup 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 10:00:46 compute-1 kernel: tap1687cc87-5c: entered promiscuous mode
Oct 09 10:00:46 compute-1 systemd-udevd[169949]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:00:46 compute-1 NetworkManager[982]: <info>  [1760004046.8501] manager: (tap1687cc87-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 09 10:00:46 compute-1 kernel: tap1687cc87-5c (unregistering): left promiscuous mode
Oct 09 10:00:46 compute-1 systemd[1]: libpod-conmon-1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5.scope: Deactivated successfully.
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.866 2 INFO nova.virt.libvirt.driver [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance destroyed successfully.
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.866 2 DEBUG nova.objects.instance [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid c7e917a6-1f6f-4739-a31a-bdcfa52bf93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.880 2 DEBUG nova.virt.libvirt.vif [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1252166476',display_name='tempest-TestNetworkBasicOps-server-1252166476',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1252166476',id=7,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbgn6SPIFM6AGarUubqFoimfuOdsNeRWX5sq4kHFgr7hG7is5Q/Q8Ek3R1Q0esxFqFL7X0+gBaYCim0P8OY9cMbX9okJGNQoFkk0zy9ycrfeQthKDNu+tA50E3TW/m2Ww==',key_name='tempest-TestNetworkBasicOps-1380098384',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:48Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-a0fec6m8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:48Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7e917a6-1f6f-4739-a31a-bdcfa52bf93b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.884 2 DEBUG nova.network.os_vif_util [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.885 2 DEBUG nova.network.os_vif_util [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.887 2 DEBUG os_vif [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1687cc87-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:46 compute-1 podman[169988]: 2025-10-09 10:00:46.893579309 +0000 UTC m=+0.036669002 container remove 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.896 2 INFO os_vif [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c')
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.899 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5efc2e3e-b34c-407b-84c4-164b8d86ce27]: (4, ('Thu Oct  9 10:00:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 (1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5)\n1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5\nThu Oct  9 10:00:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 (1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5)\n1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.901 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[de2c074e-d004-4748-adbc-e5e1cb7e9a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.902 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f792301-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:00:46 compute-1 kernel: tap4f792301-c0: left promiscuous mode
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.919 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9491962c-9b0b-4b90-922f-48961c557f0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.941 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[985a402c-92f6-46b9-a92a-2ac36c0cbf92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.941 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[24419224-438b-46c5-b656-caad86b2461b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.956 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ca7a31-20cf-4dd4-97a2-0ba373318752]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 167112, 'reachable_time': 33951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 170023, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.959 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.959 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[9d36cadf-3c65-43c2-8f04-11dd0d5a36d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:46 compute-1 systemd[1]: run-netns-ovnmeta\x2d4f792301\x2dcf2d\x2d455d\x2d8ad6\x2d8a55cc3146e9.mount: Deactivated successfully.
Oct 09 10:00:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:47.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.064 2 INFO nova.virt.libvirt.driver [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Deleting instance files /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_del
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.064 2 INFO nova.virt.libvirt.driver [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Deletion of /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_del complete
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.101 2 INFO nova.compute.manager [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Took 0.47 seconds to destroy the instance on the hypervisor.
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.104 2 DEBUG oslo.service.loopingcall [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.104 2 DEBUG nova.compute.manager [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.104 2 DEBUG nova.network.neutron [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 09 10:00:47 compute-1 ceph-mon[9795]: pgmap v830: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 4.7 KiB/s wr, 2 op/s
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.283 2 DEBUG nova.compute.manager [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-unplugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.283 2 DEBUG oslo_concurrency.lockutils [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.284 2 DEBUG oslo_concurrency.lockutils [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.284 2 DEBUG oslo_concurrency.lockutils [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.284 2 DEBUG nova.compute.manager [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] No waiting events found dispatching network-vif-unplugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.284 2 DEBUG nova.compute.manager [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-unplugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:47.451 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:00:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:47.452 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.556 2 DEBUG nova.network.neutron [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.565 2 INFO nova.compute.manager [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Took 0.46 seconds to deallocate network for instance.
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.592 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.593 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.627 2 DEBUG oslo_concurrency.processutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:00:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3430617062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.998 2 DEBUG oslo_concurrency.processutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.002 2 DEBUG nova.compute.provider_tree [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.014 2 DEBUG nova.scheduler.client.report [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.029 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.046 2 INFO nova.scheduler.client.report [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance c7e917a6-1f6f-4739-a31a-bdcfa52bf93b
Oct 09 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.088 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 09 10:00:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3430617062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:49.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:49 compute-1 ceph-mon[9795]: pgmap v831: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 1 op/s
Oct 09 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.362 2 DEBUG nova.compute.manager [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.362 2 DEBUG oslo_concurrency.lockutils [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.362 2 DEBUG oslo_concurrency.lockutils [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.363 2 DEBUG oslo_concurrency.lockutils [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.363 2 DEBUG nova.compute.manager [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] No waiting events found dispatching network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.363 2 WARNING nova.compute.manager [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received unexpected event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f for instance with vm_state deleted and task_state None.
Oct 09 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.363 2 DEBUG nova.compute.manager [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-deleted-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:50.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:00:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.117 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.130 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.130 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 09 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.142 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 09 10:00:51 compute-1 ceph-mon[9795]: pgmap v832: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 1 op/s
Oct 09 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:51 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:51.454 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:00:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:52.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:00:52 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/743125290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:52 compute-1 sudo[170070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:00:52 compute-1 sudo[170070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:52 compute-1 sudo[170070]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.492 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:00:52 compute-1 podman[170093]: 2025-10-09 10:00:52.565306416 +0000 UTC m=+0.075658134 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.705 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.706 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5058MB free_disk=59.942501068115234GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.706 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.706 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.753 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.753 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.844 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing inventories for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.901 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Updating ProviderTree inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.901 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.917 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing aggregate associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.935 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing trait associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, traits: HW_CPU_X86_AESNI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX512VAES,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 09 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.952 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:00:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:53 compute-1 ceph-mon[9795]: pgmap v833: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 29 op/s
Oct 09 10:00:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/743125290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:00:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3841926323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.301 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.305 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.316 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.330 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.330 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.330 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:54.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3841926323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1311931696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:54 compute-1 nova_compute[162974]: 2025-10-09 10:00:54.338 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:54 compute-1 nova_compute[162974]: 2025-10-09 10:00:54.339 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:54 compute-1 nova_compute[162974]: 2025-10-09 10:00:54.339 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:54 compute-1 nova_compute[162974]: 2025-10-09 10:00:54.340 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:00:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:55.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.132 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.132 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.132 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.132 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:55 compute-1 ceph-mon[9795]: pgmap v834: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 09 10:00:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:56.180 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:89:5b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed655dd9-bb73-453e-8a8b-a0dd965263b3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=188102c6-f5ba-4733-92be-2659db7ae55a) old=Port_Binding(mac=['fa:16:3e:77:89:5b 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:00:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:56.181 71059 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 188102c6-f5ba-4733-92be-2659db7ae55a in datapath ab21f371-26e2-4c4f-bba0-3c44fb308723 updated
Oct 09 10:00:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:56.182 71059 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ab21f371-26e2-4c4f-bba0-3c44fb308723 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 09 10:00:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:56.183 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6621b993-f94c-4cec-842b-6a089c2773a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:56.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3252183875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:56 compute-1 nova_compute[162974]: 2025-10-09 10:00:56.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:57.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:57 compute-1 ceph-mon[9795]: pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.0 KiB/s wr, 57 op/s
Oct 09 10:00:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3594715963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2786803058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2722941605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:57 compute-1 nova_compute[162974]: 2025-10-09 10:00:57.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:58.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:00:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:59.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:59 compute-1 ceph-mon[9795]: pgmap v836: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Oct 09 10:01:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:00.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:01.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:01 compute-1 CROND[170142]: (root) CMD (run-parts /etc/cron.hourly)
Oct 09 10:01:01 compute-1 run-parts[170145]: (/etc/cron.hourly) starting 0anacron
Oct 09 10:01:01 compute-1 run-parts[170151]: (/etc/cron.hourly) finished 0anacron
Oct 09 10:01:01 compute-1 CROND[170141]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 09 10:01:01 compute-1 ceph-mon[9795]: pgmap v837: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Oct 09 10:01:01 compute-1 podman[170153]: 2025-10-09 10:01:01.53733866 +0000 UTC m=+0.044814541 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 10:01:01 compute-1 podman[170154]: 2025-10-09 10:01:01.542345175 +0000 UTC m=+0.048962528 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3)
Oct 09 10:01:01 compute-1 nova_compute[162974]: 2025-10-09 10:01:01.863 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004046.8596463, c7e917a6-1f6f-4739-a31a-bdcfa52bf93b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:01:01 compute-1 nova_compute[162974]: 2025-10-09 10:01:01.864 2 INFO nova.compute.manager [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] VM Stopped (Lifecycle Event)
Oct 09 10:01:01 compute-1 nova_compute[162974]: 2025-10-09 10:01:01.879 2 DEBUG nova.compute.manager [None req-17acc6b4-2175-4760-943c-f00de7baeb42 - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:01:01 compute-1 nova_compute[162974]: 2025-10-09 10:01:01.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:02.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:02 compute-1 nova_compute[162974]: 2025-10-09 10:01:02.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:03.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:03 compute-1 ceph-mon[9795]: pgmap v838: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.3 KiB/s wr, 57 op/s
Oct 09 10:01:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:04.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:05.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:05 compute-1 ceph-mon[9795]: pgmap v839: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 09 10:01:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:01:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:06.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:06 compute-1 nova_compute[162974]: 2025-10-09 10:01:06.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:07.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:07 compute-1 ceph-mon[9795]: pgmap v840: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 09 10:01:07 compute-1 nova_compute[162974]: 2025-10-09 10:01:07.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:01:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:08.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.309043) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068309075, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 721, "num_deletes": 251, "total_data_size": 1416992, "memory_usage": 1444256, "flush_reason": "Manual Compaction"}
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068312470, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 932885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25222, "largest_seqno": 25938, "table_properties": {"data_size": 929346, "index_size": 1383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8231, "raw_average_key_size": 19, "raw_value_size": 922210, "raw_average_value_size": 2190, "num_data_blocks": 61, "num_entries": 421, "num_filter_entries": 421, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004022, "oldest_key_time": 1760004022, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3446 microseconds, and 2330 cpu microseconds.
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312494) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 932885 bytes OK
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312505) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312838) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312849) EVENT_LOG_v1 {"time_micros": 1760004068312846, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312857) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1413098, prev total WAL file size 1413098, number of live WAL files 2.
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(911KB)], [48(12MB)]
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068313290, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14464337, "oldest_snapshot_seqno": -1}
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5350 keys, 12340049 bytes, temperature: kUnknown
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068346148, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12340049, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12304964, "index_size": 20639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 137400, "raw_average_key_size": 25, "raw_value_size": 12208343, "raw_average_value_size": 2281, "num_data_blocks": 837, "num_entries": 5350, "num_filter_entries": 5350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.346451) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12340049 bytes
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347657) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 437.6 rd, 373.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(28.7) write-amplify(13.2) OK, records in: 5866, records dropped: 516 output_compression: NoCompression
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347672) EVENT_LOG_v1 {"time_micros": 1760004068347665, "job": 28, "event": "compaction_finished", "compaction_time_micros": 33052, "compaction_time_cpu_micros": 19656, "output_level": 6, "num_output_files": 1, "total_output_size": 12340049, "num_input_records": 5866, "num_output_records": 5350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068348281, "job": 28, "event": "table_file_deletion", "file_number": 50}
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068350230, "job": 28, "event": "table_file_deletion", "file_number": 48}
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:09.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:09 compute-1 ceph-mon[9795]: pgmap v841: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:01:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2949658230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:10.039 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:10.040 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:10.040 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:10.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:11.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:11 compute-1 ceph-mon[9795]: pgmap v842: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:01:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4140615067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 10:01:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2910271570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:01:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 10:01:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2910271570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:01:11 compute-1 nova_compute[162974]: 2025-10-09 10:01:11.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1742209419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2910271570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:01:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2910271570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:01:12 compute-1 sudo[170191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:01:12 compute-1 sudo[170191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:12 compute-1 sudo[170191]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:12 compute-1 nova_compute[162974]: 2025-10-09 10:01:12.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:13.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:13 compute-1 ceph-mon[9795]: pgmap v843: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 10:01:13 compute-1 podman[170217]: 2025-10-09 10:01:13.555030617 +0000 UTC m=+0.060470324 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 10:01:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:14.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:14 compute-1 ceph-mon[9795]: pgmap v844: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 10:01:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:01:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:01:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:16.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:16 compute-1 nova_compute[162974]: 2025-10-09 10:01:16.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:01:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:17.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:01:17 compute-1 ceph-mon[9795]: pgmap v845: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 10:01:17 compute-1 nova_compute[162974]: 2025-10-09 10:01:17.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:18.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:19.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:19 compute-1 ceph-mon[9795]: pgmap v846: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:01:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:01:20 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2235098991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:20.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:01:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:21.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:01:21 compute-1 ceph-mon[9795]: pgmap v847: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:01:21 compute-1 nova_compute[162974]: 2025-10-09 10:01:21.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:22.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:22 compute-1 nova_compute[162974]: 2025-10-09 10:01:22.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:23.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:23 compute-1 ceph-mon[9795]: pgmap v848: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 09 10:01:23 compute-1 podman[170245]: 2025-10-09 10:01:23.525208497 +0000 UTC m=+0.037272377 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:01:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:25.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:25 compute-1 ceph-mon[9795]: pgmap v849: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 09 10:01:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:26 compute-1 sudo[170263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:01:26 compute-1 sudo[170263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:26 compute-1 sudo[170263]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:26 compute-1 sudo[170288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:01:26 compute-1 sudo[170288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:26.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:26 compute-1 sudo[170288]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:26 compute-1 nova_compute[162974]: 2025-10-09 10:01:26.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:27.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:27 compute-1 ceph-mon[9795]: pgmap v850: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 09 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.822 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.823 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.833 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 09 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.880 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.880 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.885 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 09 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.885 2 INFO nova.compute.claims [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Claim successful on node compute-1.ctlplane.example.com
Oct 09 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.951 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:28 compute-1 ceph-mon[9795]: pgmap v851: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Oct 09 10:01:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:28.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:28 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:01:28 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1357456667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.291 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.295 2 DEBUG nova.compute.provider_tree [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.306 2 DEBUG nova.scheduler.client.report [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.319 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.320 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.352 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.352 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.363 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.373 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.441 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.442 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.442 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Creating image(s)
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.461 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.478 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.496 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.499 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.547 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.547 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.548 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.548 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.566 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.568 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.705 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.746 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.753 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.776 2 WARNING nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.776 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Triggering sync for uuid 0fde2924-0ac7-4ea2-b42d-290df3f52929 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.776 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.810 2 DEBUG nova.objects.instance [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 0fde2924-0ac7-4ea2-b42d-290df3f52929 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.819 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.819 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Ensure instance console log exists: /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.820 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.820 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.820 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:29.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:29 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1357456667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.228 2 DEBUG nova.policy [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 10:01:29 compute-1 ovn_controller[62080]: 2025-10-09T10:01:29Z|00075|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Oct 09 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.822 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Successfully updated port: 24c642bf-d3e7-4003-97f5-0e43aca6db7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.833 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.833 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.833 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.883 2 DEBUG nova.compute.manager [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-changed-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.883 2 DEBUG nova.compute.manager [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Refreshing instance network info cache due to event network-changed-24c642bf-d3e7-4003-97f5-0e43aca6db7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.884 2 DEBUG oslo_concurrency.lockutils [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.949 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 10:01:30 compute-1 ceph-mon[9795]: pgmap v852: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Oct 09 10:01:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:30 compute-1 sudo[170532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:01:30 compute-1 sudo[170532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:30 compute-1 sudo[170532]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.924 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Updating instance_info_cache with network_info: [{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.942 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.942 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance network_info: |[{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.942 2 DEBUG oslo_concurrency.lockutils [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.943 2 DEBUG nova.network.neutron [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Refreshing network info cache for port 24c642bf-d3e7-4003-97f5-0e43aca6db7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.945 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start _get_guest_xml network_info=[{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.949 2 WARNING nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.952 2 DEBUG nova.virt.libvirt.host [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.953 2 DEBUG nova.virt.libvirt.host [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.957 2 DEBUG nova.virt.libvirt.host [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.957 2 DEBUG nova.virt.libvirt.host [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.958 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.958 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.958 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.960 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.960 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.960 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.960 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 09 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.962 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.003000028s ======
Oct 09 10:01:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000028s
Oct 09 10:01:31 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 10:01:31 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1905495853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.321 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.345 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.349 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:31 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:01:31 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:01:31 compute-1 ceph-mon[9795]: pgmap v853: 337 pgs: 337 active+clean; 55 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 778 KiB/s wr, 44 op/s
Oct 09 10:01:31 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1905495853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.563 2 DEBUG nova.network.neutron [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Updated VIF entry in instance network info cache for port 24c642bf-d3e7-4003-97f5-0e43aca6db7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.564 2 DEBUG nova.network.neutron [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Updating instance_info_cache with network_info: [{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.575 2 DEBUG oslo_concurrency.lockutils [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:01:31 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 10:01:31 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/627844330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.718 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.719 2 DEBUG nova.virt.libvirt.vif [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:01:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-683406071',display_name='tempest-TestNetworkBasicOps-server-683406071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-683406071',id=9,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNC5+71zwS4peThbBj0rTs2iUGxV6KoykdELOAeuqqTcHI7GCX2cJpli9Fly77fC2uQduSSC/CbKmPPAuRDVwt9Ei0C4MDfiTMQHdYKRTolBvlRviK/zoaSsqEMl47FRQ==',key_name='tempest-TestNetworkBasicOps-614246910',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-8z022t5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:01:28Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=0fde2924-0ac7-4ea2-b42d-290df3f52929,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.719 2 DEBUG nova.network.os_vif_util [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.720 2 DEBUG nova.network.os_vif_util [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.721 2 DEBUG nova.objects.instance [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fde2924-0ac7-4ea2-b42d-290df3f52929 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.731 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] End _get_guest_xml xml=<domain type="kvm">
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <uuid>0fde2924-0ac7-4ea2-b42d-290df3f52929</uuid>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <name>instance-00000009</name>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <memory>131072</memory>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <vcpu>1</vcpu>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <metadata>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <nova:name>tempest-TestNetworkBasicOps-server-683406071</nova:name>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <nova:creationTime>2025-10-09 10:01:30</nova:creationTime>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <nova:flavor name="m1.nano">
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <nova:memory>128</nova:memory>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <nova:disk>1</nova:disk>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <nova:swap>0</nova:swap>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <nova:vcpus>1</nova:vcpus>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       </nova:flavor>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <nova:owner>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       </nova:owner>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <nova:ports>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <nova:port uuid="24c642bf-d3e7-4003-97f5-0e43aca6db7b">
Oct 09 10:01:31 compute-1 nova_compute[162974]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         </nova:port>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       </nova:ports>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     </nova:instance>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   </metadata>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <sysinfo type="smbios">
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <system>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <entry name="manufacturer">RDO</entry>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <entry name="product">OpenStack Compute</entry>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <entry name="serial">0fde2924-0ac7-4ea2-b42d-290df3f52929</entry>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <entry name="uuid">0fde2924-0ac7-4ea2-b42d-290df3f52929</entry>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <entry name="family">Virtual Machine</entry>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     </system>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   </sysinfo>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <os>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <boot dev="hd"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <smbios mode="sysinfo"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   </os>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <features>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <acpi/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <apic/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <vmcoreinfo/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   </features>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <clock offset="utc">
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <timer name="hpet" present="no"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   </clock>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <cpu mode="host-model" match="exact">
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   </cpu>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   <devices>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <disk type="network" device="disk">
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/0fde2924-0ac7-4ea2-b42d-290df3f52929_disk">
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       </source>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       </auth>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <target dev="vda" bus="virtio"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     </disk>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <disk type="network" device="cdrom">
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config">
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       </source>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 10:01:31 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       </auth>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <target dev="sda" bus="sata"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     </disk>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <interface type="ethernet">
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <mac address="fa:16:3e:d9:5b:8d"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <mtu size="1442"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <target dev="tap24c642bf-d3"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     </interface>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <serial type="pty">
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <log file="/var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/console.log" append="off"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     </serial>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <video>
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     </video>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <input type="tablet" bus="usb"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <rng model="virtio">
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <backend model="random">/dev/urandom</backend>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     </rng>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <controller type="usb" index="0"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     <memballoon model="virtio">
Oct 09 10:01:31 compute-1 nova_compute[162974]:       <stats period="10"/>
Oct 09 10:01:31 compute-1 nova_compute[162974]:     </memballoon>
Oct 09 10:01:31 compute-1 nova_compute[162974]:   </devices>
Oct 09 10:01:31 compute-1 nova_compute[162974]: </domain>
Oct 09 10:01:31 compute-1 nova_compute[162974]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.732 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Preparing to wait for external event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.732 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.732 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.733 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.733 2 DEBUG nova.virt.libvirt.vif [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:01:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-683406071',display_name='tempest-TestNetworkBasicOps-server-683406071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-683406071',id=9,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNC5+71zwS4peThbBj0rTs2iUGxV6KoykdELOAeuqqTcHI7GCX2cJpli9Fly77fC2uQduSSC/CbKmPPAuRDVwt9Ei0C4MDfiTMQHdYKRTolBvlRviK/zoaSsqEMl47FRQ==',key_name='tempest-TestNetworkBasicOps-614246910',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-8z022t5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:01:28Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=0fde2924-0ac7-4ea2-b42d-290df3f52929,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.733 2 DEBUG nova.network.os_vif_util [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.734 2 DEBUG nova.network.os_vif_util [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.734 2 DEBUG os_vif [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24c642bf-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24c642bf-d3, col_values=(('external_ids', {'iface-id': '24c642bf-d3e7-4003-97f5-0e43aca6db7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:5b:8d', 'vm-uuid': '0fde2924-0ac7-4ea2-b42d-290df3f52929'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:31 compute-1 NetworkManager[982]: <info>  [1760004091.7404] manager: (tap24c642bf-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.746 2 INFO os_vif [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3')
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.780 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.780 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.780 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:d9:5b:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.780 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Using config drive
Oct 09 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.801 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.223 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Creating config drive at /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.227 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphw96eop1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.351 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphw96eop1" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.377 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.380 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:32 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/627844330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.488 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.490 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Deleting local config drive /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config because it was imported into RBD.
Oct 09 10:01:32 compute-1 kernel: tap24c642bf-d3: entered promiscuous mode
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5384] manager: (tap24c642bf-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Oct 09 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00076|binding|INFO|Claiming lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b for this chassis.
Oct 09 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00077|binding|INFO|24c642bf-d3e7-4003-97f5-0e43aca6db7b: Claiming fa:16:3e:d9:5b:8d 10.100.0.5
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5478] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 09 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5485] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.548 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:5b:8d 10.100.0.5'], port_security=['fa:16:3e:d9:5b:8d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fde2924-0ac7-4ea2-b42d-290df3f52929', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '7', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=887b951a-388d-4a48-aabf-54a7b01d9585, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=24c642bf-d3e7-4003-97f5-0e43aca6db7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.549 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 24c642bf-d3e7-4003-97f5-0e43aca6db7b in datapath f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 bound to our chassis
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.549 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1bd1d23-0de7-4b9c-b34f-27d8df0f3147
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.558 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[2158f308-8d80-4124-bacb-eebd930491b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.561 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1bd1d23-01 in ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.563 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1bd1d23-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.563 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6150a262-cd47-48a6-93a0-e51f6cfd750a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.564 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6704da-f87c-4ec8-9455-eed8b6f65e3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 systemd-udevd[170717]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5776] device (tap24c642bf-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5785] device (tap24c642bf-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.581 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[2daedd3e-4bb8-48a5-8601-8d7da5f683c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 systemd-machined[120683]: New machine qemu-5-instance-00000009.
Oct 09 10:01:32 compute-1 podman[170681]: 2025-10-09 10:01:32.609874793 +0000 UTC m=+0.103838268 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.608 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[dff045ba-9c13-40c1-bf8d-6d96a7ab451b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 systemd[1]: Started Virtual Machine qemu-5-instance-00000009.
Oct 09 10:01:32 compute-1 podman[170680]: 2025-10-09 10:01:32.623355436 +0000 UTC m=+0.117181351 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 09 10:01:32 compute-1 sudo[170730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:01:32 compute-1 sudo[170730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.641 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[d641ec51-642e-4c62-8dda-4ac838967106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 sudo[170730]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.6469] manager: (tapf1bd1d23-00): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.647 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[443ecede-df85-476c-b8f1-827cb3bfee2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00078|binding|INFO|Setting lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b ovn-installed in OVS
Oct 09 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00079|binding|INFO|Setting lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b up in Southbound
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.679 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[482a3b96-5426-4af0-8d6b-15a5c6c24301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.681 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ea4b19-5209-42ed-a42e-2093eb92f3fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.6989] device (tapf1bd1d23-00): carrier: link connected
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.703 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[f153fd8f-7646-4376-88a3-ce2b7c66cde3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.719 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[bf78a294-71bc-4b13-809c-8f8d693163ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1bd1d23-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:76:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 177626, 'reachable_time': 23639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 170776, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.733 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e9648c23-d829-4a78-9eb7-bbc78d8c340b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:762f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 177626, 'tstamp': 177626}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 170777, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.749 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[48b7c4b1-cba2-47cb-b1e3-57cbd636b255]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1bd1d23-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:76:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 177626, 'reachable_time': 23639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 170778, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.784 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[7070c60c-4ab2-418f-983d-fe6ca70c8dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.813 2 DEBUG nova.compute.manager [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.814 2 DEBUG oslo_concurrency.lockutils [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.814 2 DEBUG oslo_concurrency.lockutils [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.814 2 DEBUG oslo_concurrency.lockutils [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.815 2 DEBUG nova.compute.manager [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Processing event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.860 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5303ff51-3967-4a0c-84d8-e5e58a970c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.861 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1bd1d23-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.861 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.861 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1bd1d23-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.8638] manager: (tapf1bd1d23-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct 09 10:01:32 compute-1 kernel: tapf1bd1d23-00: entered promiscuous mode
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.866 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1bd1d23-00, col_values=(('external_ids', {'iface-id': '8eb8f8eb-7931-447c-950a-c32841e79526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00080|binding|INFO|Releasing lport 8eb8f8eb-7931-447c-950a-c32841e79526 from this chassis (sb_readonly=0)
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.881 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.882 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6a302e14-01a1-4e4d-96d1-d4487f2d5822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.883 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: global
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     log         /dev/log local0 debug
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     log-tag     haproxy-metadata-proxy-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     user        root
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     group       root
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     maxconn     1024
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     pidfile     /var/lib/neutron/external/pids/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.pid.haproxy
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     daemon
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: defaults
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     log global
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     mode http
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     option httplog
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     option dontlognull
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     option http-server-close
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     option forwardfor
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     retries                 3
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     timeout http-request    30s
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     timeout connect         30s
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     timeout client          32s
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     timeout server          32s
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     timeout http-keep-alive 30s
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: listen listener
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     bind 169.254.169.254:80
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:     http-request add-header X-OVN-Network-ID f1bd1d23-0de7-4b9c-b34f-27d8df0f3147
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.885 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'env', 'PROCESS_TAG=haproxy-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 10:01:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:33 compute-1 podman[170849]: 2025-10-09 10:01:33.209444375 +0000 UTC m=+0.038960569 container create 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:01:33 compute-1 systemd[1]: Started libpod-conmon-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4.scope.
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.255 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.259 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004093.2581146, 0fde2924-0ac7-4ea2-b42d-290df3f52929 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.260 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] VM Started (Lifecycle Event)
Oct 09 10:01:33 compute-1 systemd[1]: Started libcrun container.
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.263 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.266 2 INFO nova.virt.libvirt.driver [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance spawned successfully.
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.267 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 09 10:01:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eaf819a371be319bcec251902a55501d9807b84351499abfedbd74b4f82185b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 10:01:33 compute-1 podman[170849]: 2025-10-09 10:01:33.279969496 +0000 UTC m=+0.109485710 container init 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.280 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:01:33 compute-1 podman[170849]: 2025-10-09 10:01:33.285366908 +0000 UTC m=+0.114883102 container start 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.287 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 10:01:33 compute-1 podman[170849]: 2025-10-09 10:01:33.1949705 +0000 UTC m=+0.024486715 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.292 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.292 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.293 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.293 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.293 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.294 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:01:33 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [NOTICE]   (170865) : New worker (170867) forked
Oct 09 10:01:33 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [NOTICE]   (170865) : Loading success.
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.304 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.305 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004093.2582216, 0fde2924-0ac7-4ea2-b42d-290df3f52929 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.305 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] VM Paused (Lifecycle Event)
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.328 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.333 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004093.2632186, 0fde2924-0ac7-4ea2-b42d-290df3f52929 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.334 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] VM Resumed (Lifecycle Event)
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.348 2 INFO nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Took 4.91 seconds to spawn the instance on the hypervisor.
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.348 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.350 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.355 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.380 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.399 2 INFO nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Took 5.54 seconds to build instance.
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.408 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.409 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.409 2 INFO nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.409 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:33 compute-1 ceph-mon[9795]: pgmap v854: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Oct 09 10:01:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:34.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.873 2 DEBUG nova.compute.manager [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.874 2 DEBUG oslo_concurrency.lockutils [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.874 2 DEBUG oslo_concurrency.lockutils [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.875 2 DEBUG oslo_concurrency.lockutils [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.875 2 DEBUG nova.compute.manager [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] No waiting events found dispatching network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.875 2 WARNING nova.compute.manager [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received unexpected event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b for instance with vm_state active and task_state None.
Oct 09 10:01:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:35.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.375 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.375 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.376 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.376 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.377 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.378 2 INFO nova.compute.manager [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Terminating instance
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.379 2 DEBUG nova.compute.manager [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 09 10:01:35 compute-1 kernel: tap24c642bf-d3 (unregistering): left promiscuous mode
Oct 09 10:01:35 compute-1 NetworkManager[982]: <info>  [1760004095.4019] device (tap24c642bf-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:01:35 compute-1 ovn_controller[62080]: 2025-10-09T10:01:35Z|00081|binding|INFO|Releasing lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b from this chassis (sb_readonly=0)
Oct 09 10:01:35 compute-1 ovn_controller[62080]: 2025-10-09T10:01:35Z|00082|binding|INFO|Setting lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b down in Southbound
Oct 09 10:01:35 compute-1 ovn_controller[62080]: 2025-10-09T10:01:35Z|00083|binding|INFO|Removing iface tap24c642bf-d3 ovn-installed in OVS
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.416 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:5b:8d 10.100.0.5'], port_security=['fa:16:3e:d9:5b:8d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fde2924-0ac7-4ea2-b42d-290df3f52929', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '9', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=887b951a-388d-4a48-aabf-54a7b01d9585, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=24c642bf-d3e7-4003-97f5-0e43aca6db7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.418 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 24c642bf-d3e7-4003-97f5-0e43aca6db7b in datapath f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 unbound from our chassis
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.419 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.420 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[aa82bac7-3dcf-45a1-9819-e79b14206069]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.420 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 namespace which is not needed anymore
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:35 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 09 10:01:35 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Consumed 2.712s CPU time.
Oct 09 10:01:35 compute-1 systemd-machined[120683]: Machine qemu-5-instance-00000009 terminated.
Oct 09 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [NOTICE]   (170865) : haproxy version is 2.8.14-c23fe91
Oct 09 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [NOTICE]   (170865) : path to executable is /usr/sbin/haproxy
Oct 09 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [WARNING]  (170865) : Exiting Master process...
Oct 09 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [ALERT]    (170865) : Current worker (170867) exited with code 143 (Terminated)
Oct 09 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [WARNING]  (170865) : All workers exited. Exiting... (0)
Oct 09 10:01:35 compute-1 systemd[1]: libpod-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4.scope: Deactivated successfully.
Oct 09 10:01:35 compute-1 conmon[170861]: conmon 0e7b3cff2d349dc218fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4.scope/container/memory.events
Oct 09 10:01:35 compute-1 podman[170892]: 2025-10-09 10:01:35.51893905 +0000 UTC m=+0.033180938 container died 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 09 10:01:35 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4-userdata-shm.mount: Deactivated successfully.
Oct 09 10:01:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-7eaf819a371be319bcec251902a55501d9807b84351499abfedbd74b4f82185b-merged.mount: Deactivated successfully.
Oct 09 10:01:35 compute-1 podman[170892]: 2025-10-09 10:01:35.542259165 +0000 UTC m=+0.056501054 container cleanup 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 10:01:35 compute-1 systemd[1]: libpod-conmon-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4.scope: Deactivated successfully.
Oct 09 10:01:35 compute-1 podman[170915]: 2025-10-09 10:01:35.585508697 +0000 UTC m=+0.027619949 container remove 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:01:35 compute-1 NetworkManager[982]: <info>  [1760004095.5919] manager: (tap24c642bf-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.591 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[38d0aa70-e5ae-4c2e-9cf4-e1c77239d333]: (4, ('Thu Oct  9 10:01:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 (0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4)\n0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4\nThu Oct  9 10:01:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 (0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4)\n0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.594 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[744e3eed-81a9-4886-b80f-cab12517cf89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.596 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1bd1d23-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.602 2 INFO nova.virt.libvirt.driver [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance destroyed successfully.
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.602 2 DEBUG nova.objects.instance [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 0fde2924-0ac7-4ea2-b42d-290df3f52929 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.611 2 DEBUG nova.virt.libvirt.vif [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T10:01:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-683406071',display_name='tempest-TestNetworkBasicOps-server-683406071',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-683406071',id=9,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNC5+71zwS4peThbBj0rTs2iUGxV6KoykdELOAeuqqTcHI7GCX2cJpli9Fly77fC2uQduSSC/CbKmPPAuRDVwt9Ei0C4MDfiTMQHdYKRTolBvlRviK/zoaSsqEMl47FRQ==',key_name='tempest-TestNetworkBasicOps-614246910',keypairs=<?>,launch_index=0,launched_at=2025-10-09T10:01:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-8z022t5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T10:01:33Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=0fde2924-0ac7-4ea2-b42d-290df3f52929,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.611 2 DEBUG nova.network.os_vif_util [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.612 2 DEBUG nova.network.os_vif_util [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.612 2 DEBUG os_vif [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24c642bf-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:35 compute-1 kernel: tapf1bd1d23-00: left promiscuous mode
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.620 2 INFO os_vif [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3')
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.621 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c50eb19e-7af9-487d-ba1f-53dcbd363b15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:35 compute-1 ceph-mon[9795]: pgmap v855: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.633 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf0c163-af58-4e7e-bc60-288ce20d8b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.634 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[72b8c58c-2fb3-4701-8f44-d2238dfe7d41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.648 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f48e390e-1bc0-497e-81bd-a7502e5e9e9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 177620, 'reachable_time': 25234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 170952, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:35 compute-1 systemd[1]: run-netns-ovnmeta\x2df1bd1d23\x2d0de7\x2d4b9c\x2db34f\x2d27d8df0f3147.mount: Deactivated successfully.
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.652 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.652 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff0bf4a-acbf-4d42-aad2-c60b1d84513b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:01:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.789 2 INFO nova.virt.libvirt.driver [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Deleting instance files /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929_del
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.789 2 INFO nova.virt.libvirt.driver [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Deletion of /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929_del complete
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.823 2 INFO nova.compute.manager [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Took 0.44 seconds to destroy the instance on the hypervisor.
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.823 2 DEBUG oslo.service.loopingcall [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.823 2 DEBUG nova.compute.manager [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 09 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.823 2 DEBUG nova.network.neutron [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 09 10:01:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:36.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.628 2 DEBUG nova.network.neutron [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.639 2 INFO nova.compute.manager [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Took 0.82 seconds to deallocate network for instance.
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.666 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.666 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.702 2 DEBUG oslo_concurrency.processutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.930 2 DEBUG nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-vif-unplugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.931 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.931 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 DEBUG nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] No waiting events found dispatching network-vif-unplugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 WARNING nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received unexpected event network-vif-unplugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b for instance with vm_state deleted and task_state None.
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 DEBUG nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.933 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.933 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.933 2 DEBUG nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] No waiting events found dispatching network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.933 2 WARNING nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received unexpected event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b for instance with vm_state deleted and task_state None.
Oct 09 10:01:37 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:01:37 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3366232617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.037 2 DEBUG oslo_concurrency.processutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.041 2 DEBUG nova.compute.provider_tree [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.053 2 DEBUG nova.scheduler.client.report [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.065 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:37.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.089 2 INFO nova.scheduler.client.report [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 0fde2924-0ac7-4ea2-b42d-290df3f52929
Oct 09 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.155 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:37 compute-1 ceph-mon[9795]: pgmap v856: 337 pgs: 337 active+clean; 67 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 126 op/s
Oct 09 10:01:37 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3366232617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:38.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:39.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:39 compute-1 ceph-mon[9795]: pgmap v857: 337 pgs: 337 active+clean; 67 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 09 10:01:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:01:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:40.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:01:40 compute-1 nova_compute[162974]: 2025-10-09 10:01:40.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:40 compute-1 nova_compute[162974]: 2025-10-09 10:01:40.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:40 compute-1 nova_compute[162974]: 2025-10-09 10:01:40.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:41.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:41 compute-1 ceph-mon[9795]: pgmap v858: 337 pgs: 337 active+clean; 53 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Oct 09 10:01:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:01:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:42.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:01:42 compute-1 nova_compute[162974]: 2025-10-09 10:01:42.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:43.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:43 compute-1 ceph-mon[9795]: pgmap v859: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Oct 09 10:01:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:44.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:44 compute-1 podman[170988]: 2025-10-09 10:01:44.546646903 +0000 UTC m=+0.056525139 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 10:01:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:45.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:45 compute-1 nova_compute[162974]: 2025-10-09 10:01:45.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:45 compute-1 ceph-mon[9795]: pgmap v860: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 09 10:01:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:46.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:47.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:47 compute-1 ceph-mon[9795]: pgmap v861: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 09 10:01:47 compute-1 nova_compute[162974]: 2025-10-09 10:01:47.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:47.877 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:01:47 compute-1 nova_compute[162974]: 2025-10-09 10:01:47.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:47.878 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:01:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:48.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:49.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:49 compute-1 ceph-mon[9795]: pgmap v862: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 938 B/s wr, 17 op/s
Oct 09 10:01:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:01:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:50.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:50 compute-1 nova_compute[162974]: 2025-10-09 10:01:50.601 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004095.600311, 0fde2924-0ac7-4ea2-b42d-290df3f52929 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:01:50 compute-1 nova_compute[162974]: 2025-10-09 10:01:50.601 2 INFO nova.compute.manager [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] VM Stopped (Lifecycle Event)
Oct 09 10:01:50 compute-1 nova_compute[162974]: 2025-10-09 10:01:50.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:50 compute-1 nova_compute[162974]: 2025-10-09 10:01:50.621 2 DEBUG nova.compute.manager [None req-f19ae35d-1ca8-4c6c-9801-9a9826d3d207 - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:01:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:51.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:51 compute-1 ceph-mon[9795]: pgmap v863: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 938 B/s wr, 17 op/s
Oct 09 10:01:51 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:51.880 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:01:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:52.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:01:52 compute-1 sudo[171016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:01:52 compute-1 sudo[171016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:52 compute-1 sudo[171016]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/503740570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:52 compute-1 nova_compute[162974]: 2025-10-09 10:01:52.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:53.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.133 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.133 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.133 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:01:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/865184998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.484 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.687 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.688 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5027MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.689 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.690 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:53 compute-1 ceph-mon[9795]: pgmap v864: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 597 B/s wr, 6 op/s
Oct 09 10:01:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/865184998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.770 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.770 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.782 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:01:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1811252629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.124 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.127 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.136 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.152 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.152 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:54.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:54 compute-1 podman[171087]: 2025-10-09 10:01:54.534267803 +0000 UTC m=+0.044764097 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 09 10:01:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1811252629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:55.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.148 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.149 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.149 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.149 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.149 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.150 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.150 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:55 compute-1 ceph-mon[9795]: pgmap v865: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:01:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1985554974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2940217175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3470198818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:56 compute-1 nova_compute[162974]: 2025-10-09 10:01:56.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:56 compute-1 nova_compute[162974]: 2025-10-09 10:01:56.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:01:56 compute-1 nova_compute[162974]: 2025-10-09 10:01:56.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:01:56 compute-1 nova_compute[162974]: 2025-10-09 10:01:56.126 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:01:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:56.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2633646493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3931842824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:57.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:57 compute-1 nova_compute[162974]: 2025-10-09 10:01:57.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:57 compute-1 ceph-mon[9795]: pgmap v866: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 10:01:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/224353728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:57 compute-1 nova_compute[162974]: 2025-10-09 10:01:57.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:58.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:01:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:01:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:59.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:01:59 compute-1 ceph-mon[9795]: pgmap v867: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 10:02:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:00.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:00 compute-1 nova_compute[162974]: 2025-10-09 10:02:00.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:01 compute-1 ceph-mon[9795]: pgmap v868: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 09 10:02:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:02.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:02 compute-1 nova_compute[162974]: 2025-10-09 10:02:02.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:03 compute-1 podman[171109]: 2025-10-09 10:02:03.54098963 +0000 UTC m=+0.045068912 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:02:03 compute-1 podman[171110]: 2025-10-09 10:02:03.569853542 +0000 UTC m=+0.070667218 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct 09 10:02:03 compute-1 ceph-mon[9795]: pgmap v869: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:02:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:04.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:02:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:05.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:05 compute-1 nova_compute[162974]: 2025-10-09 10:02:05.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:05 compute-1 ceph-mon[9795]: pgmap v870: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:02:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:06.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:07 compute-1 nova_compute[162974]: 2025-10-09 10:02:07.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:07 compute-1 ceph-mon[9795]: pgmap v871: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 10:02:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:08.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:09 compute-1 ceph-mon[9795]: pgmap v872: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 10:02:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:10.040 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:10.041 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:10.041 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:10.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:10 compute-1 nova_compute[162974]: 2025-10-09 10:02:10.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:11.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:11 compute-1 ceph-mon[9795]: pgmap v873: 337 pgs: 337 active+clean; 89 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 148 KiB/s wr, 93 op/s
Oct 09 10:02:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:12.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:12 compute-1 sudo[171147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:02:12 compute-1 sudo[171147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:12 compute-1 sudo[171147]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:12 compute-1 nova_compute[162974]: 2025-10-09 10:02:12.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/4098107868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:02:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/4098107868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:02:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:13.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:13 compute-1 ceph-mon[9795]: pgmap v874: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 09 10:02:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:02:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:14.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:02:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:15.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:15 compute-1 podman[171174]: 2025-10-09 10:02:15.569028163 +0000 UTC m=+0.075761438 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true)
Oct 09 10:02:15 compute-1 nova_compute[162974]: 2025-10-09 10:02:15.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:15 compute-1 ceph-mon[9795]: pgmap v875: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 10:02:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:16.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:02:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:17.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:02:17 compute-1 nova_compute[162974]: 2025-10-09 10:02:17.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:17 compute-1 ceph-mon[9795]: pgmap v876: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:02:17 compute-1 ovn_controller[62080]: 2025-10-09T10:02:17Z|00084|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 09 10:02:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:18.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:02:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:19.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:02:19 compute-1 ceph-mon[9795]: pgmap v877: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:02:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:02:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:20.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:20 compute-1 nova_compute[162974]: 2025-10-09 10:02:20.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:02:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:02:21 compute-1 ceph-mon[9795]: pgmap v878: 337 pgs: 337 active+clean; 114 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 09 10:02:21 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1295465010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:22.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:22 compute-1 nova_compute[162974]: 2025-10-09 10:02:22.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:23.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:23 compute-1 ceph-mon[9795]: pgmap v879: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 2.0 MiB/s wr, 74 op/s
Oct 09 10:02:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:24.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:25 compute-1 podman[171202]: 2025-10-09 10:02:25.534865046 +0000 UTC m=+0.038421403 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 10:02:25 compute-1 nova_compute[162974]: 2025-10-09 10:02:25.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:25 compute-1 ceph-mon[9795]: pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct 09 10:02:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:27 compute-1 nova_compute[162974]: 2025-10-09 10:02:27.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:27 compute-1 ceph-mon[9795]: pgmap v881: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 30 op/s
Oct 09 10:02:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:29.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:29 compute-1 ceph-mon[9795]: pgmap v882: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 09 10:02:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:30.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:30 compute-1 nova_compute[162974]: 2025-10-09 10:02:30.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:30 compute-1 sudo[171221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:02:30 compute-1 sudo[171221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:30 compute-1 sudo[171221]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:30 compute-1 sudo[171246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:02:30 compute-1 sudo[171246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:31 compute-1 sudo[171246]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:31 compute-1 ceph-mon[9795]: pgmap v883: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 09 10:02:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:32 compute-1 nova_compute[162974]: 2025-10-09 10:02:32.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:32 compute-1 sudo[171300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:02:32 compute-1 sudo[171300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:32 compute-1 sudo[171300]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:33.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:33 compute-1 ceph-mon[9795]: pgmap v884: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 6.2 KiB/s wr, 10 op/s
Oct 09 10:02:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:34.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:34 compute-1 podman[171326]: 2025-10-09 10:02:34.544323614 +0000 UTC m=+0.047408786 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 09 10:02:34 compute-1 podman[171327]: 2025-10-09 10:02:34.544398565 +0000 UTC m=+0.046549267 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 09 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:02:34 compute-1 ceph-mon[9795]: pgmap v885: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.057 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.058 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.068 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.122 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.123 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.127 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.128 2 INFO nova.compute.claims [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Claim successful on node compute-1.ctlplane.example.com
Oct 09 10:02:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.191 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:02:35 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4131332810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.544 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.548 2 DEBUG nova.compute.provider_tree [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.561 2 DEBUG nova.scheduler.client.report [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.575 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.576 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.611 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.611 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.638 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.652 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.716 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.716 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.717 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Creating image(s)
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.737 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.761 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:02:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.785 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.789 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.849 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.850 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.851 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.851 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.872 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.875 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:35 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4131332810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.019 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.070 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.131 2 DEBUG nova.objects.instance [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 21bbcca2-5cec-4324-9af4-6d2090b6b113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.138 2 DEBUG nova.policy [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.143 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.143 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Ensure instance console log exists: /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.144 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.144 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.144 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:36.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.604 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Successfully created port: 52ec2db5-2e22-45a7-92ee-f0e360776c10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 09 10:02:36 compute-1 ceph-mon[9795]: pgmap v886: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 09 10:02:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:37 compute-1 nova_compute[162974]: 2025-10-09 10:02:37.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:37 compute-1 sudo[171550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:02:37 compute-1 sudo[171550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:37 compute-1 sudo[171550]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.158 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Successfully updated port: 52ec2db5-2e22-45a7-92ee-f0e360776c10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.169 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.169 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.169 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.235 2 DEBUG nova.compute.manager [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.235 2 DEBUG nova.compute.manager [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.235 2 DEBUG oslo_concurrency.lockutils [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.285 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 10:02:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:38.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:38 compute-1 ceph-mon[9795]: pgmap v887: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.138 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:02:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:39.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.156 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.156 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance network_info: |[{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.157 2 DEBUG oslo_concurrency.lockutils [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.157 2 DEBUG nova.network.neutron [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.160 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start _get_guest_xml network_info=[{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.163 2 WARNING nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.167 2 DEBUG nova.virt.libvirt.host [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.168 2 DEBUG nova.virt.libvirt.host [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.173 2 DEBUG nova.virt.libvirt.host [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.173 2 DEBUG nova.virt.libvirt.host [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.173 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.178 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:39 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 10:02:39 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2502987184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.559 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.579 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.585 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:39 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2502987184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:39 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 10:02:39 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1572329529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.958 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.960 2 DEBUG nova.virt.libvirt.vif [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:02:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-670315443',display_name='tempest-TestNetworkBasicOps-server-670315443',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-670315443',id=11,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMC7xAI/YYK+cbn0PHRoxiiahdIQdKccwfERXZSRnLEKnS9i37SYurywRQCZNQPHgGjlY2G9Hgc0qmCz+iCo4fLyxnirlBRGL3WmP1CDMLNiBavqZTIOedAyGcrchrWbVA==',key_name='tempest-TestNetworkBasicOps-462284814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-wq2l0ql1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:02:35Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=21bbcca2-5cec-4324-9af4-6d2090b6b113,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.960 2 DEBUG nova.network.os_vif_util [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.961 2 DEBUG nova.network.os_vif_util [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.961 2 DEBUG nova.objects.instance [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 21bbcca2-5cec-4324-9af4-6d2090b6b113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.981 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] End _get_guest_xml xml=<domain type="kvm">
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <uuid>21bbcca2-5cec-4324-9af4-6d2090b6b113</uuid>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <name>instance-0000000b</name>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <memory>131072</memory>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <vcpu>1</vcpu>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <metadata>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <nova:name>tempest-TestNetworkBasicOps-server-670315443</nova:name>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <nova:creationTime>2025-10-09 10:02:39</nova:creationTime>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <nova:flavor name="m1.nano">
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <nova:memory>128</nova:memory>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <nova:disk>1</nova:disk>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <nova:swap>0</nova:swap>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <nova:vcpus>1</nova:vcpus>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       </nova:flavor>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <nova:owner>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       </nova:owner>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <nova:ports>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <nova:port uuid="52ec2db5-2e22-45a7-92ee-f0e360776c10">
Oct 09 10:02:39 compute-1 nova_compute[162974]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         </nova:port>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       </nova:ports>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     </nova:instance>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   </metadata>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <sysinfo type="smbios">
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <system>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <entry name="manufacturer">RDO</entry>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <entry name="product">OpenStack Compute</entry>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <entry name="serial">21bbcca2-5cec-4324-9af4-6d2090b6b113</entry>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <entry name="uuid">21bbcca2-5cec-4324-9af4-6d2090b6b113</entry>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <entry name="family">Virtual Machine</entry>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     </system>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   </sysinfo>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <os>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <boot dev="hd"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <smbios mode="sysinfo"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   </os>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <features>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <acpi/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <apic/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <vmcoreinfo/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   </features>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <clock offset="utc">
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <timer name="hpet" present="no"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   </clock>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <cpu mode="host-model" match="exact">
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   </cpu>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   <devices>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <disk type="network" device="disk">
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/21bbcca2-5cec-4324-9af4-6d2090b6b113_disk">
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       </source>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       </auth>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <target dev="vda" bus="virtio"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     </disk>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <disk type="network" device="cdrom">
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config">
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       </source>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 10:02:39 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       </auth>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <target dev="sda" bus="sata"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     </disk>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <interface type="ethernet">
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <mac address="fa:16:3e:19:24:81"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <mtu size="1442"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <target dev="tap52ec2db5-2e"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     </interface>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <serial type="pty">
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <log file="/var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/console.log" append="off"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     </serial>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <video>
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     </video>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <input type="tablet" bus="usb"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <rng model="virtio">
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <backend model="random">/dev/urandom</backend>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     </rng>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <controller type="usb" index="0"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     <memballoon model="virtio">
Oct 09 10:02:39 compute-1 nova_compute[162974]:       <stats period="10"/>
Oct 09 10:02:39 compute-1 nova_compute[162974]:     </memballoon>
Oct 09 10:02:39 compute-1 nova_compute[162974]:   </devices>
Oct 09 10:02:39 compute-1 nova_compute[162974]: </domain>
Oct 09 10:02:39 compute-1 nova_compute[162974]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.982 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Preparing to wait for external event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.982 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.982 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.983 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.983 2 DEBUG nova.virt.libvirt.vif [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:02:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-670315443',display_name='tempest-TestNetworkBasicOps-server-670315443',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-670315443',id=11,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMC7xAI/YYK+cbn0PHRoxiiahdIQdKccwfERXZSRnLEKnS9i37SYurywRQCZNQPHgGjlY2G9Hgc0qmCz+iCo4fLyxnirlBRGL3WmP1CDMLNiBavqZTIOedAyGcrchrWbVA==',key_name='tempest-TestNetworkBasicOps-462284814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-wq2l0ql1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:02:35Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=21bbcca2-5cec-4324-9af4-6d2090b6b113,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.984 2 DEBUG nova.network.os_vif_util [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.984 2 DEBUG nova.network.os_vif_util [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.985 2 DEBUG os_vif [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52ec2db5-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52ec2db5-2e, col_values=(('external_ids', {'iface-id': '52ec2db5-2e22-45a7-92ee-f0e360776c10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:24:81', 'vm-uuid': '21bbcca2-5cec-4324-9af4-6d2090b6b113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:39 compute-1 NetworkManager[982]: <info>  [1760004159.9946] manager: (tap52ec2db5-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 09 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.002 2 INFO os_vif [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e')
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.035 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.036 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.036 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:19:24:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.036 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Using config drive
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.055 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.164 2 DEBUG nova.network.neutron [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.165 2 DEBUG nova.network.neutron [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.181 2 DEBUG oslo_concurrency.lockutils [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:02:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:02:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.367 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Creating config drive at /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.371 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wi_r0wp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.499 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wi_r0wp" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.525 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.529 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.639 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.640 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Deleting local config drive /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config because it was imported into RBD.
Oct 09 10:02:40 compute-1 kernel: tap52ec2db5-2e: entered promiscuous mode
Oct 09 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.6862] manager: (tap52ec2db5-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct 09 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00085|binding|INFO|Claiming lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 for this chassis.
Oct 09 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00086|binding|INFO|52ec2db5-2e22-45a7-92ee-f0e360776c10: Claiming fa:16:3e:19:24:81 10.100.0.8
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.698 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:24:81 10.100.0.8'], port_security=['fa:16:3e:19:24:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '21bbcca2-5cec-4324-9af4-6d2090b6b113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9cb722ba-1853-4a45-bd00-f5690460099e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=52ec2db5-2e22-45a7-92ee-f0e360776c10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.699 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 52ec2db5-2e22-45a7-92ee-f0e360776c10 in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be bound to our chassis
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.700 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e36da7d-913d-4101-a7c2-e1698abf35be
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.711 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[72a6ca2d-b74f-447a-b11d-942b3100ed7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.712 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e36da7d-91 in ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.713 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e36da7d-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.714 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[4078a9ae-488a-4de9-ba48-82aa0ed3c5c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.714 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[31de3aac-71e9-4cdd-b928-f90bf67c0748]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 systemd-udevd[171711]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:02:40 compute-1 systemd-machined[120683]: New machine qemu-6-instance-0000000b.
Oct 09 10:02:40 compute-1 systemd[1]: Started Virtual Machine qemu-6-instance-0000000b.
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.726 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[485bfd74-9c90-4643-b2a2-aeba376c0639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.7311] device (tap52ec2db5-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.7317] device (tap52ec2db5-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 10:02:40 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1572329529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:40 compute-1 ceph-mon[9795]: pgmap v888: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.748 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[88c361d3-08b1-4b86-b0fd-d907b18b0c50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00087|binding|INFO|Setting lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 ovn-installed in OVS
Oct 09 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00088|binding|INFO|Setting lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 up in Southbound
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.774 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[ced34f8a-932e-41c1-87e9-4adf3476dccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.7797] manager: (tap7e36da7d-90): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.779 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[20620a93-f21e-4225-9297-fae8ef98145c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 systemd-udevd[171714]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.799 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[f79b566a-3589-47c4-b9c4-525ad79c46fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.801 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[39e632de-055a-4044-8bf7-1478780bd490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.8150] device (tap7e36da7d-90): carrier: link connected
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.818 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[753562fc-4596-4c5c-bc7f-3b0c3b856a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.832 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4a77ae-f335-48ae-9ece-6831863974a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e36da7d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a3:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 184438, 'reachable_time': 28208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 171735, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.847 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1c569013-2f48-4449-b55b-303c33b6a8ee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed9:a343'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 184438, 'tstamp': 184438}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 171736, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.856 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a95cd2b2-b192-486d-a443-c1369aaf2146]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e36da7d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a3:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 184438, 'reachable_time': 28208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 171737, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.875 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[871d9e1d-013c-43b2-850d-0796935cd216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.908 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[052f7b86-3d3e-410b-af33-2fcdd679e306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.909 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e36da7d-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.910 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.911 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e36da7d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:02:40 compute-1 kernel: tap7e36da7d-90: entered promiscuous mode
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.9142] manager: (tap7e36da7d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.916 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e36da7d-90, col_values=(('external_ids', {'iface-id': 'e74168ad-5871-4088-b5cd-db351251a793'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00089|binding|INFO|Releasing lport e74168ad-5871-4088-b5cd-db351251a793 from this chassis (sb_readonly=0)
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.921 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e36da7d-913d-4101-a7c2-e1698abf35be.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e36da7d-913d-4101-a7c2-e1698abf35be.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.921 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[fab09887-f639-45ba-b217-09f6c9d75175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.922 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: global
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     log         /dev/log local0 debug
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     log-tag     haproxy-metadata-proxy-7e36da7d-913d-4101-a7c2-e1698abf35be
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     user        root
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     group       root
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     maxconn     1024
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     pidfile     /var/lib/neutron/external/pids/7e36da7d-913d-4101-a7c2-e1698abf35be.pid.haproxy
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     daemon
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: defaults
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     log global
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     mode http
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     option httplog
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     option dontlognull
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     option http-server-close
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     option forwardfor
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     retries                 3
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     timeout http-request    30s
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     timeout connect         30s
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     timeout client          32s
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     timeout server          32s
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     timeout http-keep-alive 30s
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: listen listener
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     bind 169.254.169.254:80
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:     http-request add-header X-OVN-Network-ID 7e36da7d-913d-4101-a7c2-e1698abf35be
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.923 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'env', 'PROCESS_TAG=haproxy-7e36da7d-913d-4101-a7c2-e1698abf35be', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e36da7d-913d-4101-a7c2-e1698abf35be.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.957 2 DEBUG nova.compute.manager [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.958 2 DEBUG oslo_concurrency.lockutils [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.958 2 DEBUG oslo_concurrency.lockutils [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.958 2 DEBUG oslo_concurrency.lockutils [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.959 2 DEBUG nova.compute.manager [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Processing event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 09 10:02:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:41.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:41 compute-1 podman[171808]: 2025-10-09 10:02:41.236206895 +0000 UTC m=+0.043427931 container create 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 09 10:02:41 compute-1 systemd[1]: Started libpod-conmon-1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2.scope.
Oct 09 10:02:41 compute-1 systemd[1]: Started libcrun container.
Oct 09 10:02:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c62e9def9ad6686c7da16f6d7f5c0040e366e3403846fd843dae5358e993db42/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 10:02:41 compute-1 podman[171808]: 2025-10-09 10:02:41.214399343 +0000 UTC m=+0.021620390 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 10:02:41 compute-1 podman[171808]: 2025-10-09 10:02:41.310808504 +0000 UTC m=+0.118029541 container init 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 09 10:02:41 compute-1 podman[171808]: 2025-10-09 10:02:41.315175457 +0000 UTC m=+0.122396495 container start 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:02:41 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [NOTICE]   (171823) : New worker (171825) forked
Oct 09 10:02:41 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [NOTICE]   (171823) : Loading success.
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.390 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004161.389727, 21bbcca2-5cec-4324-9af4-6d2090b6b113 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.390 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] VM Started (Lifecycle Event)
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.392 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.394 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.396 2 INFO nova.virt.libvirt.driver [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance spawned successfully.
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.396 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.411 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.414 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.420 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.420 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.420 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.421 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.421 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.421 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.427 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.427 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004161.3898058, 21bbcca2-5cec-4324-9af4-6d2090b6b113 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.427 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] VM Paused (Lifecycle Event)
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.443 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.444 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004161.3939636, 21bbcca2-5cec-4324-9af4-6d2090b6b113 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.445 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] VM Resumed (Lifecycle Event)
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.458 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.459 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.465 2 INFO nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Took 5.75 seconds to spawn the instance on the hypervisor.
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.465 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.470 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.505 2 INFO nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Took 6.41 seconds to build instance.
Oct 09 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.515 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:42 compute-1 nova_compute[162974]: 2025-10-09 10:02:42.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.014 2 DEBUG nova.compute.manager [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 DEBUG oslo_concurrency.lockutils [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 DEBUG oslo_concurrency.lockutils [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 DEBUG oslo_concurrency.lockutils [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 DEBUG nova.compute.manager [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 WARNING nova.compute.manager [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.
Oct 09 10:02:43 compute-1 ceph-mon[9795]: pgmap v889: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.9 MiB/s wr, 33 op/s
Oct 09 10:02:43 compute-1 ovn_controller[62080]: 2025-10-09T10:02:43Z|00090|binding|INFO|Releasing lport e74168ad-5871-4088-b5cd-db351251a793 from this chassis (sb_readonly=0)
Oct 09 10:02:43 compute-1 NetworkManager[982]: <info>  [1760004163.1321] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 09 10:02:43 compute-1 NetworkManager[982]: <info>  [1760004163.1328] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:43.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:43 compute-1 ovn_controller[62080]: 2025-10-09T10:02:43Z|00091|binding|INFO|Releasing lport e74168ad-5871-4088-b5cd-db351251a793 from this chassis (sb_readonly=0)
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.315 2 DEBUG nova.compute.manager [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.315 2 DEBUG nova.compute.manager [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.316 2 DEBUG oslo_concurrency.lockutils [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.316 2 DEBUG oslo_concurrency.lockutils [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.316 2 DEBUG nova.network.neutron [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:02:44 compute-1 nova_compute[162974]: 2025-10-09 10:02:44.141 2 DEBUG nova.network.neutron [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:02:44 compute-1 nova_compute[162974]: 2025-10-09 10:02:44.141 2 DEBUG nova.network.neutron [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:02:44 compute-1 nova_compute[162974]: 2025-10-09 10:02:44.151 2 DEBUG oslo_concurrency.lockutils [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:02:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:44.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:44 compute-1 nova_compute[162974]: 2025-10-09 10:02:44.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:45 compute-1 ceph-mon[9795]: pgmap v890: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.9 MiB/s wr, 33 op/s
Oct 09 10:02:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:45.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:46 compute-1 podman[171834]: 2025-10-09 10:02:46.548566766 +0000 UTC m=+0.059610189 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 10:02:47 compute-1 ceph-mon[9795]: pgmap v891: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 10:02:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:47.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:47 compute-1 nova_compute[162974]: 2025-10-09 10:02:47.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:48.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:49 compute-1 ceph-mon[9795]: pgmap v892: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:02:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3602145734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:49.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:49 compute-1 nova_compute[162974]: 2025-10-09 10:02:49.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:02:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:51 compute-1 ceph-mon[9795]: pgmap v893: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:02:51 compute-1 nova_compute[162974]: 2025-10-09 10:02:51.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:51.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2931780533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1952398429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:52 compute-1 ovn_controller[62080]: 2025-10-09T10:02:52Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:24:81 10.100.0.8
Oct 09 10:02:52 compute-1 ovn_controller[62080]: 2025-10-09T10:02:52Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:24:81 10.100.0.8
Oct 09 10:02:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:52.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:52 compute-1 nova_compute[162974]: 2025-10-09 10:02:52.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:52 compute-1 sudo[171860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:02:52 compute-1 sudo[171860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:52 compute-1 sudo[171860]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:53 compute-1 ceph-mon[9795]: pgmap v894: 337 pgs: 337 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 130 op/s
Oct 09 10:02:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:53.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:54 compute-1 nova_compute[162974]: 2025-10-09 10:02:54.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:54.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:55 compute-1 ceph-mon[9795]: pgmap v895: 337 pgs: 337 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.137 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.137 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.137 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:55.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:02:55 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2548672515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.523 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.573 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.574 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 10:02:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.803 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.806 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4852MB free_disk=59.94662857055664GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.806 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.807 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.861 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance 21bbcca2-5cec-4324-9af4-6d2090b6b113 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.861 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.861 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.886 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2548672515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1152412393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:56.156 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:02:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:56.157 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:02:56 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/371860424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.247 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.250 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.265 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.280 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.280 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:56.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:56 compute-1 podman[171932]: 2025-10-09 10:02:56.532401714 +0000 UTC m=+0.044257765 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:02:57 compute-1 ceph-mon[9795]: pgmap v896: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 235 op/s
Oct 09 10:02:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/371860424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2725321656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2491592431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:57.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.282 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.282 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.282 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.461 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.462 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.462 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 09 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.462 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 21bbcca2-5cec-4324-9af4-6d2090b6b113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2965295672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.239 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.253 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 09 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.255 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:58.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:59 compute-1 ceph-mon[9795]: pgmap v897: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 09 10:02:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:02:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:00 compute-1 nova_compute[162974]: 2025-10-09 10:03:00.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:00.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:01 compute-1 ceph-mon[9795]: pgmap v898: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 09 10:03:01 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:01.159 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:01.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:02 compute-1 nova_compute[162974]: 2025-10-09 10:03:02.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:03 compute-1 ceph-mon[9795]: pgmap v899: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 09 10:03:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:03.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:04.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:05 compute-1 nova_compute[162974]: 2025-10-09 10:03:05.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:05 compute-1 ceph-mon[9795]: pgmap v900: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 09 10:03:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:03:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:05.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:05 compute-1 podman[171954]: 2025-10-09 10:03:05.535440067 +0000 UTC m=+0.041238624 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 10:03:05 compute-1 podman[171953]: 2025-10-09 10:03:05.557267386 +0000 UTC m=+0.063502276 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 09 10:03:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:06.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:07 compute-1 ceph-mon[9795]: pgmap v901: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Oct 09 10:03:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:07.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:07 compute-1 nova_compute[162974]: 2025-10-09 10:03:07.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:08.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:09 compute-1 ceph-mon[9795]: pgmap v902: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 10:03:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:09.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:10 compute-1 nova_compute[162974]: 2025-10-09 10:03:10.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:10.042 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:10.042 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:10.043 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:10.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:11 compute-1 ceph-mon[9795]: pgmap v903: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 10:03:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:11.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3114795455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:03:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3114795455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:03:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:12.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:12 compute-1 nova_compute[162974]: 2025-10-09 10:03:12.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:12 compute-1 sudo[171992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:03:12 compute-1 sudo[171992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:12 compute-1 sudo[171992]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:13 compute-1 ceph-mon[9795]: pgmap v904: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Oct 09 10:03:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:13.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:14 compute-1 nova_compute[162974]: 2025-10-09 10:03:14.307 2 INFO nova.compute.manager [None req-51fa9074-1180-4247-b844-3f0c12fcbe0e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Get console output
Oct 09 10:03:14 compute-1 nova_compute[162974]: 2025-10-09 10:03:14.318 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 09 10:03:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.117 2 DEBUG nova.compute.manager [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.118 2 DEBUG nova.compute.manager [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.118 2 DEBUG oslo_concurrency.lockutils [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.118 2 DEBUG oslo_concurrency.lockutils [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.118 2 DEBUG nova.network.neutron [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:03:15 compute-1 ceph-mon[9795]: pgmap v905: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 10:03:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:15.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.120 2 INFO nova.compute.manager [None req-383ab36a-f7e6-4d44-9386-b58203a8332c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Get console output
Oct 09 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.125 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 09 10:03:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.440 2 DEBUG nova.network.neutron [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.441 2 DEBUG nova.network.neutron [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.453 2 DEBUG oslo_concurrency.lockutils [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:03:17 compute-1 ceph-mon[9795]: pgmap v906: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 10:03:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:17.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.211 2 DEBUG nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.212 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.212 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.212 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.212 2 DEBUG nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.213 2 WARNING nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.213 2 DEBUG nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.213 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.214 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.214 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.214 2 DEBUG nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.214 2 WARNING nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.
Oct 09 10:03:17 compute-1 podman[172020]: 2025-10-09 10:03:17.554962845 +0000 UTC m=+0.059464195 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.858 2 DEBUG nova.compute.manager [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.858 2 DEBUG nova.compute.manager [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.858 2 DEBUG oslo_concurrency.lockutils [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.858 2 DEBUG oslo_concurrency.lockutils [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.859 2 DEBUG nova.network.neutron [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:03:18 compute-1 nova_compute[162974]: 2025-10-09 10:03:18.001 2 INFO nova.compute.manager [None req-b145a251-9669-4ba9-beb2-44183637d49e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Get console output
Oct 09 10:03:18 compute-1 nova_compute[162974]: 2025-10-09 10:03:18.006 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 09 10:03:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:18.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:19 compute-1 ceph-mon[9795]: pgmap v907: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 09 10:03:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:19.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.294 2 DEBUG nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.294 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.294 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 WARNING nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.296 2 DEBUG nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.296 2 WARNING nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.405 2 DEBUG nova.network.neutron [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.406 2 DEBUG nova.network.neutron [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.422 2 DEBUG oslo_concurrency.lockutils [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:03:20 compute-1 nova_compute[162974]: 2025-10-09 10:03:20.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:03:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:20.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:21 compute-1 ceph-mon[9795]: pgmap v908: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 09 10:03:21 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2378775170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:21.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:22.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:22 compute-1 nova_compute[162974]: 2025-10-09 10:03:22.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.042 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.042 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.042 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.042 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.043 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.044 2 INFO nova.compute.manager [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Terminating instance
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.045 2 DEBUG nova.compute.manager [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 09 10:03:23 compute-1 kernel: tap52ec2db5-2e (unregistering): left promiscuous mode
Oct 09 10:03:23 compute-1 NetworkManager[982]: <info>  [1760004203.0847] device (tap52ec2db5-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00092|binding|INFO|Releasing lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 from this chassis (sb_readonly=0)
Oct 09 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00093|binding|INFO|Setting lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 down in Southbound
Oct 09 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00094|binding|INFO|Removing iface tap52ec2db5-2e ovn-installed in OVS
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.101 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:24:81 10.100.0.8'], port_security=['fa:16:3e:19:24:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '21bbcca2-5cec-4324-9af4-6d2090b6b113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9cb722ba-1853-4a45-bd00-f5690460099e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=52ec2db5-2e22-45a7-92ee-f0e360776c10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.102 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 52ec2db5-2e22-45a7-92ee-f0e360776c10 in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be unbound from our chassis
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.103 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e36da7d-913d-4101-a7c2-e1698abf35be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.104 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[7a24e2c8-1b32-4a01-8a1d-0069146b07c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.106 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be namespace which is not needed anymore
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 09 10:03:23 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Consumed 11.634s CPU time.
Oct 09 10:03:23 compute-1 systemd-machined[120683]: Machine qemu-6-instance-0000000b terminated.
Oct 09 10:03:23 compute-1 ceph-mon[9795]: pgmap v909: 337 pgs: 337 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 09 10:03:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:23.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:23 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [NOTICE]   (171823) : haproxy version is 2.8.14-c23fe91
Oct 09 10:03:23 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [NOTICE]   (171823) : path to executable is /usr/sbin/haproxy
Oct 09 10:03:23 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [ALERT]    (171823) : Current worker (171825) exited with code 143 (Terminated)
Oct 09 10:03:23 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [WARNING]  (171823) : All workers exited. Exiting... (0)
Oct 09 10:03:23 compute-1 systemd[1]: libpod-1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2.scope: Deactivated successfully.
Oct 09 10:03:23 compute-1 podman[172066]: 2025-10-09 10:03:23.228075841 +0000 UTC m=+0.042111401 container died 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 10:03:23 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2-userdata-shm.mount: Deactivated successfully.
Oct 09 10:03:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-c62e9def9ad6686c7da16f6d7f5c0040e366e3403846fd843dae5358e993db42-merged.mount: Deactivated successfully.
Oct 09 10:03:23 compute-1 podman[172066]: 2025-10-09 10:03:23.2550432 +0000 UTC m=+0.069078760 container cleanup 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:03:23 compute-1 kernel: tap52ec2db5-2e: entered promiscuous mode
Oct 09 10:03:23 compute-1 NetworkManager[982]: <info>  [1760004203.2577] manager: (tap52ec2db5-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 kernel: tap52ec2db5-2e (unregistering): left promiscuous mode
Oct 09 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00095|binding|INFO|Claiming lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 for this chassis.
Oct 09 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00096|binding|INFO|52ec2db5-2e22-45a7-92ee-f0e360776c10: Claiming fa:16:3e:19:24:81 10.100.0.8
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.275 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:24:81 10.100.0.8'], port_security=['fa:16:3e:19:24:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '21bbcca2-5cec-4324-9af4-6d2090b6b113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9cb722ba-1853-4a45-bd00-f5690460099e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=52ec2db5-2e22-45a7-92ee-f0e360776c10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00097|binding|INFO|Releasing lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 from this chassis (sb_readonly=0)
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.289 2 INFO nova.virt.libvirt.driver [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance destroyed successfully.
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.289 2 DEBUG nova.objects.instance [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 21bbcca2-5cec-4324-9af4-6d2090b6b113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.293 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:24:81 10.100.0.8'], port_security=['fa:16:3e:19:24:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '21bbcca2-5cec-4324-9af4-6d2090b6b113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9cb722ba-1853-4a45-bd00-f5690460099e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=52ec2db5-2e22-45a7-92ee-f0e360776c10) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:03:23 compute-1 systemd[1]: libpod-conmon-1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2.scope: Deactivated successfully.
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.298 2 DEBUG nova.virt.libvirt.vif [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T10:02:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-670315443',display_name='tempest-TestNetworkBasicOps-server-670315443',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-670315443',id=11,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMC7xAI/YYK+cbn0PHRoxiiahdIQdKccwfERXZSRnLEKnS9i37SYurywRQCZNQPHgGjlY2G9Hgc0qmCz+iCo4fLyxnirlBRGL3WmP1CDMLNiBavqZTIOedAyGcrchrWbVA==',key_name='tempest-TestNetworkBasicOps-462284814',keypairs=<?>,launch_index=0,launched_at=2025-10-09T10:02:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-wq2l0ql1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T10:02:41Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=21bbcca2-5cec-4324-9af4-6d2090b6b113,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.298 2 DEBUG nova.network.os_vif_util [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.299 2 DEBUG nova.network.os_vif_util [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.299 2 DEBUG os_vif [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52ec2db5-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.306 2 INFO os_vif [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e')
Oct 09 10:03:23 compute-1 podman[172096]: 2025-10-09 10:03:23.326416122 +0000 UTC m=+0.046259159 container remove 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.331 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0b7e44-d6a3-4d0b-8ad5-8ba81a9bf66b]: (4, ('Thu Oct  9 10:03:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be (1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2)\n1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2\nThu Oct  9 10:03:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be (1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2)\n1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.332 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c02ad1-9724-41ea-8c3a-0633e72384a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.333 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e36da7d-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:23 compute-1 kernel: tap7e36da7d-90: left promiscuous mode
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.353 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1ede6934-2d05-46f7-a57e-b3a7dfe185a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.370 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[732c4fbc-b310-4922-a618-445cffd226de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.371 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f62697-f7f3-4c22-94c5-0da840efc786]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.386 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6092d29e-683b-48c6-bc34-8683767dd9e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 184434, 'reachable_time': 24754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 172132, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 systemd[1]: run-netns-ovnmeta\x2d7e36da7d\x2d913d\x2d4101\x2da7c2\x2de1698abf35be.mount: Deactivated successfully.
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.397 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.398 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[268f9c8a-916c-4571-b7e7-ede349737ddf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.399 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 52ec2db5-2e22-45a7-92ee-f0e360776c10 in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be unbound from our chassis
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.400 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e36da7d-913d-4101-a7c2-e1698abf35be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.400 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fe3a25-6955-4e76-9479-9aabe42e3088]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.401 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 52ec2db5-2e22-45a7-92ee-f0e360776c10 in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be unbound from our chassis
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.402 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e36da7d-913d-4101-a7c2-e1698abf35be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.402 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[3c05aa04-ac5f-4933-8958-6730f2bfbb63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.459 2 DEBUG nova.compute.manager [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.459 2 DEBUG oslo_concurrency.lockutils [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.459 2 DEBUG oslo_concurrency.lockutils [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.460 2 DEBUG oslo_concurrency.lockutils [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.460 2 DEBUG nova.compute.manager [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.460 2 DEBUG nova.compute.manager [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.491 2 INFO nova.virt.libvirt.driver [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Deleting instance files /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113_del
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.492 2 INFO nova.virt.libvirt.driver [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Deletion of /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113_del complete
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.526 2 INFO nova.compute.manager [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Took 0.48 seconds to destroy the instance on the hypervisor.
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.526 2 DEBUG oslo.service.loopingcall [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.527 2 DEBUG nova.compute.manager [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 09 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.527 2 DEBUG nova.network.neutron [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.260 2 DEBUG nova.compute.manager [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.260 2 DEBUG nova.compute.manager [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.260 2 DEBUG oslo_concurrency.lockutils [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.261 2 DEBUG oslo_concurrency.lockutils [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.261 2 DEBUG nova.network.neutron [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:03:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:24.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.884 2 DEBUG nova.network.neutron [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.896 2 INFO nova.compute.manager [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Took 1.37 seconds to deallocate network for instance.
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.930 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.930 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.981 2 DEBUG oslo_concurrency.processutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:25.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:25 compute-1 ceph-mon[9795]: pgmap v910: 337 pgs: 337 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Oct 09 10:03:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:03:25 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2459753808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.330 2 DEBUG oslo_concurrency.processutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.335 2 DEBUG nova.compute.provider_tree [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.346 2 DEBUG nova.scheduler.client.report [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.367 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.393 2 INFO nova.scheduler.client.report [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 21bbcca2-5cec-4324-9af4-6d2090b6b113
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.445 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.503 2 DEBUG nova.network.neutron [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.504 2 DEBUG nova.network.neutron [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.521 2 DEBUG oslo_concurrency.lockutils [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.548 2 DEBUG nova.compute.manager [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.548 2 DEBUG oslo_concurrency.lockutils [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.548 2 DEBUG oslo_concurrency.lockutils [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.549 2 DEBUG oslo_concurrency.lockutils [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.549 2 DEBUG nova.compute.manager [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.549 2 WARNING nova.compute.manager [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state deleted and task_state None.
Oct 09 10:03:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:26 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2459753808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:26 compute-1 nova_compute[162974]: 2025-10-09 10:03:26.325 2 DEBUG nova.compute.manager [req-6afb7f98-58f2-467d-a4ed-b494b6a1bdf3 req-16ea68e0-622d-41f5-9c3f-61b1876903e0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-deleted-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 10:03:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:26.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 10:03:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:27.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:27 compute-1 ceph-mon[9795]: pgmap v911: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 58 op/s
Oct 09 10:03:27 compute-1 podman[172159]: 2025-10-09 10:03:27.54224928 +0000 UTC m=+0.053141413 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:03:27 compute-1 nova_compute[162974]: 2025-10-09 10:03:27.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:27 compute-1 nova_compute[162974]: 2025-10-09 10:03:27.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:27 compute-1 nova_compute[162974]: 2025-10-09 10:03:27.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:28 compute-1 nova_compute[162974]: 2025-10-09 10:03:28.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:28.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:29.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:29 compute-1 ceph-mon[9795]: pgmap v912: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 09 10:03:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:30.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:31 compute-1 ceph-mon[9795]: pgmap v913: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 09 10:03:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:32 compute-1 nova_compute[162974]: 2025-10-09 10:03:32.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:33 compute-1 sudo[172180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:03:33 compute-1 sudo[172180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:33 compute-1 sudo[172180]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:33.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:33 compute-1 ceph-mon[9795]: pgmap v914: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 58 op/s
Oct 09 10:03:33 compute-1 nova_compute[162974]: 2025-10-09 10:03:33.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:34.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:03:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:35.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:03:35 compute-1 ceph-mon[9795]: pgmap v915: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Oct 09 10:03:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:03:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:36.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:36 compute-1 podman[172208]: 2025-10-09 10:03:36.540611617 +0000 UTC m=+0.044342967 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 10:03:36 compute-1 podman[172207]: 2025-10-09 10:03:36.569385402 +0000 UTC m=+0.073997523 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 10:03:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:37.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:37 compute-1 ceph-mon[9795]: pgmap v916: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Oct 09 10:03:37 compute-1 nova_compute[162974]: 2025-10-09 10:03:37.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:37 compute-1 sudo[172241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:03:37 compute-1 sudo[172241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:37 compute-1 sudo[172241]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:38 compute-1 sudo[172266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:03:38 compute-1 sudo[172266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:38 compute-1 nova_compute[162974]: 2025-10-09 10:03:38.280 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004203.2788818, 21bbcca2-5cec-4324-9af4-6d2090b6b113 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:03:38 compute-1 nova_compute[162974]: 2025-10-09 10:03:38.280 2 INFO nova.compute.manager [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] VM Stopped (Lifecycle Event)
Oct 09 10:03:38 compute-1 nova_compute[162974]: 2025-10-09 10:03:38.294 2 DEBUG nova.compute.manager [None req-2bdbf53d-9a8c-4257-91b8-dc4eedd773fc - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:03:38 compute-1 nova_compute[162974]: 2025-10-09 10:03:38.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:38.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:38 compute-1 sudo[172266]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:39.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:39 compute-1 ceph-mon[9795]: pgmap v917: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.166 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.166 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.177 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.228 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.228 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.232 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.232 2 INFO nova.compute.claims [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Claim successful on node compute-1.ctlplane.example.com
Oct 09 10:03:40 compute-1 ceph-mon[9795]: pgmap v918: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.297 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:03:40 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4048734863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.670 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.674 2 DEBUG nova.compute.provider_tree [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.684 2 DEBUG nova.scheduler.client.report [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.700 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.700 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.729 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.729 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.742 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.756 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 09 10:03:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.908 2 DEBUG nova.policy [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.912 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.913 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.913 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Creating image(s)
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.939 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.958 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.976 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.979 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.024 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.025 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.026 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.026 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.045 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.047 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.195 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:41.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.254 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 09 10:03:41 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4048734863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.329 2 DEBUG nova.objects.instance [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 29f00e1c-dcdd-4a28-b141-a900eb34b836 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.339 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.340 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Ensure instance console log exists: /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.340 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.340 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.340 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.351 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Successfully created port: a450260b-c4da-4f56-bf08-713a5ccc3d0e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.192 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Successfully updated port: a450260b-c4da-4f56-bf08-713a5ccc3d0e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.212 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.212 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.212 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.264 2 DEBUG nova.compute.manager [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.264 2 DEBUG nova.compute.manager [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing instance network info cache due to event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.264 2 DEBUG oslo_concurrency.lockutils [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:03:42 compute-1 ceph-mon[9795]: pgmap v919: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.328 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 10:03:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:42 compute-1 sudo[172510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:03:42 compute-1 sudo[172510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:42 compute-1 sudo[172510]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.725 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.741 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.741 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance network_info: |[{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.742 2 DEBUG oslo_concurrency.lockutils [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.742 2 DEBUG nova.network.neutron [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.744 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start _get_guest_xml network_info=[{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.747 2 WARNING nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.753 2 DEBUG nova.virt.libvirt.host [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.753 2 DEBUG nova.virt.libvirt.host [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.756 2 DEBUG nova.virt.libvirt.host [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.756 2 DEBUG nova.virt.libvirt.host [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.757 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.757 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.757 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.759 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.759 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.759 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.759 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.761 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:43 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 10:03:43 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4018809995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.108 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.126 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.129 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:43.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:03:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:03:43 compute-1 ceph-mon[9795]: pgmap v920: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:03:43 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4018809995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.459 2 DEBUG nova.network.neutron [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updated VIF entry in instance network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.460 2 DEBUG nova.network.neutron [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:03:43 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 10:03:43 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4110228176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.470 2 DEBUG oslo_concurrency.lockutils [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.477 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.478 2 DEBUG nova.virt.libvirt.vif [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:03:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697662314',display_name='tempest-TestNetworkBasicOps-server-697662314',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697662314',id=13,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtKLJ6IG9u4a8nuHneFynw1vBGpmAOOthC0luN75md/pSNPLJ1OiBs1QaWTfRgLBRYBcOf7wBzJd4+LCaHfI9OClhJh7S3mGctEWrkgZF/O/aOkt4rBN7LklD620tBk2Q==',key_name='tempest-TestNetworkBasicOps-110254677',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-4alywc2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:03:40Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=29f00e1c-dcdd-4a28-b141-a900eb34b836,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.478 2 DEBUG nova.network.os_vif_util [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.479 2 DEBUG nova.network.os_vif_util [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.480 2 DEBUG nova.objects.instance [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29f00e1c-dcdd-4a28-b141-a900eb34b836 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.489 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] End _get_guest_xml xml=<domain type="kvm">
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <uuid>29f00e1c-dcdd-4a28-b141-a900eb34b836</uuid>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <name>instance-0000000d</name>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <memory>131072</memory>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <vcpu>1</vcpu>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <metadata>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <nova:name>tempest-TestNetworkBasicOps-server-697662314</nova:name>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <nova:creationTime>2025-10-09 10:03:42</nova:creationTime>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <nova:flavor name="m1.nano">
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <nova:memory>128</nova:memory>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <nova:disk>1</nova:disk>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <nova:swap>0</nova:swap>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <nova:vcpus>1</nova:vcpus>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       </nova:flavor>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <nova:owner>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       </nova:owner>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <nova:ports>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <nova:port uuid="a450260b-c4da-4f56-bf08-713a5ccc3d0e">
Oct 09 10:03:43 compute-1 nova_compute[162974]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         </nova:port>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       </nova:ports>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     </nova:instance>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   </metadata>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <sysinfo type="smbios">
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <system>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <entry name="manufacturer">RDO</entry>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <entry name="product">OpenStack Compute</entry>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <entry name="serial">29f00e1c-dcdd-4a28-b141-a900eb34b836</entry>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <entry name="uuid">29f00e1c-dcdd-4a28-b141-a900eb34b836</entry>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <entry name="family">Virtual Machine</entry>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     </system>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   </sysinfo>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <os>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <boot dev="hd"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <smbios mode="sysinfo"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   </os>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <features>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <acpi/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <apic/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <vmcoreinfo/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   </features>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <clock offset="utc">
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <timer name="hpet" present="no"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   </clock>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <cpu mode="host-model" match="exact">
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   </cpu>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   <devices>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <disk type="network" device="disk">
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/29f00e1c-dcdd-4a28-b141-a900eb34b836_disk">
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       </source>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       </auth>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <target dev="vda" bus="virtio"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     </disk>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <disk type="network" device="cdrom">
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <driver type="raw" cache="none"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <source protocol="rbd" name="vms/29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config">
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <host name="192.168.122.100" port="6789"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <host name="192.168.122.102" port="6789"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <host name="192.168.122.101" port="6789"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       </source>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <auth username="openstack">
Oct 09 10:03:43 compute-1 nova_compute[162974]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       </auth>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <target dev="sda" bus="sata"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     </disk>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <interface type="ethernet">
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <mac address="fa:16:3e:ac:02:fe"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <mtu size="1442"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <target dev="tapa450260b-c4"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     </interface>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <serial type="pty">
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <log file="/var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/console.log" append="off"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     </serial>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <video>
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <model type="virtio"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     </video>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <input type="tablet" bus="usb"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <rng model="virtio">
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <backend model="random">/dev/urandom</backend>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     </rng>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <controller type="usb" index="0"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     <memballoon model="virtio">
Oct 09 10:03:43 compute-1 nova_compute[162974]:       <stats period="10"/>
Oct 09 10:03:43 compute-1 nova_compute[162974]:     </memballoon>
Oct 09 10:03:43 compute-1 nova_compute[162974]:   </devices>
Oct 09 10:03:43 compute-1 nova_compute[162974]: </domain>
Oct 09 10:03:43 compute-1 nova_compute[162974]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.490 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Preparing to wait for external event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.491 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.491 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.491 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.492 2 DEBUG nova.virt.libvirt.vif [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:03:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697662314',display_name='tempest-TestNetworkBasicOps-server-697662314',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697662314',id=13,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtKLJ6IG9u4a8nuHneFynw1vBGpmAOOthC0luN75md/pSNPLJ1OiBs1QaWTfRgLBRYBcOf7wBzJd4+LCaHfI9OClhJh7S3mGctEWrkgZF/O/aOkt4rBN7LklD620tBk2Q==',key_name='tempest-TestNetworkBasicOps-110254677',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-4alywc2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:03:40Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=29f00e1c-dcdd-4a28-b141-a900eb34b836,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.492 2 DEBUG nova.network.os_vif_util [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.492 2 DEBUG nova.network.os_vif_util [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.493 2 DEBUG os_vif [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa450260b-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa450260b-c4, col_values=(('external_ids', {'iface-id': 'a450260b-c4da-4f56-bf08-713a5ccc3d0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:02:fe', 'vm-uuid': '29f00e1c-dcdd-4a28-b141-a900eb34b836'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:43 compute-1 NetworkManager[982]: <info>  [1760004223.4980] manager: (tapa450260b-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.503 2 INFO os_vif [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4')
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.536 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.537 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.537 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:ac:02:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.537 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Using config drive
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.554 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.748 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Creating config drive at /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.752 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9d6qlqge execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.871 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9d6qlqge" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.892 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.895 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.975 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.976 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Deleting local config drive /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config because it was imported into RBD.
Oct 09 10:03:44 compute-1 kernel: tapa450260b-c4: entered promiscuous mode
Oct 09 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.0136] manager: (tapa450260b-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00098|binding|INFO|Claiming lport a450260b-c4da-4f56-bf08-713a5ccc3d0e for this chassis.
Oct 09 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00099|binding|INFO|a450260b-c4da-4f56-bf08-713a5ccc3d0e: Claiming fa:16:3e:ac:02:fe 10.100.0.4
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.024 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:02:fe 10.100.0.4'], port_security=['fa:16:3e:ac:02:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29f00e1c-dcdd-4a28-b141-a900eb34b836', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a733533a-76c7-46e6-89e3-803597fe93b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '68293693-d770-49bf-b0b3-d26af71ce606', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0fc8e9a-af61-4397-9551-67e71824e91c, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=a450260b-c4da-4f56-bf08-713a5ccc3d0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.025 71059 INFO neutron.agent.ovn.metadata.agent [-] Port a450260b-c4da-4f56-bf08-713a5ccc3d0e in datapath a733533a-76c7-46e6-89e3-803597fe93b6 bound to our chassis
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.026 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a733533a-76c7-46e6-89e3-803597fe93b6
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.035 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[74362613-bbf8-4ba2-af4d-a1d9d16eadc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.035 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa733533a-71 in ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.036 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa733533a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.036 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6495e9b4-d44d-441e-ab1f-5048118fe367]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.037 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c65b6dd1-cbea-4a6b-9f8e-8a5190c0a6bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 systemd-udevd[172671]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.047 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0b4023-7743-4e47-a50c-fb8bfcd3365f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.0497] device (tapa450260b-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.0502] device (tapa450260b-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 10:03:44 compute-1 systemd-machined[120683]: New machine qemu-7-instance-0000000d.
Oct 09 10:03:44 compute-1 systemd[1]: Started Virtual Machine qemu-7-instance-0000000d.
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.068 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[524c918e-dad2-4d9f-8923-be7a0efc85a1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.091 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1b2d95-dc86-496a-abea-86c256a791a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00100|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e ovn-installed in OVS
Oct 09 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00101|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e up in Southbound
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.0984] manager: (tapa733533a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.099 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd59227-c5d4-4241-a0a4-a4e2b6cb577a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.125 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[5567ecb0-fc23-47ec-87c7-b0ebe85c5142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.127 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[ced6e277-696b-47c1-bdf2-889cda6fb2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.1457] device (tapa733533a-70): carrier: link connected
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.149 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[c52d8601-c391-405a-a010-404347fb3a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.160 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2daa3b-69a0-4c8e-9b87-c8513128c4eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa733533a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:b0:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 190771, 'reachable_time': 22880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 172696, 'error': None, 'target': 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.170 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5e0d80-5d46-405e-967e-ef808b652956]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:b04f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 190771, 'tstamp': 190771}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 172697, 'error': None, 'target': 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.182 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ef2715-61b4-4200-8d98-87ac8c5a02a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa733533a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:b0:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 190771, 'reachable_time': 22880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 172698, 'error': None, 'target': 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.203 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[183a32e5-9c7c-4504-b5d5-cc16a4f187bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.241 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[db3f07b2-1a71-47c8-8959-712cf555b9ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.242 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa733533a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.242 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.243 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa733533a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:44 compute-1 kernel: tapa733533a-70: entered promiscuous mode
Oct 09 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.2451] manager: (tapa733533a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.249 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa733533a-70, col_values=(('external_ids', {'iface-id': '56190dc5-983f-4623-a0ae-120f81d9f7de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00102|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.253 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a733533a-76c7-46e6-89e3-803597fe93b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a733533a-76c7-46e6-89e3-803597fe93b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.254 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfb086b-61cf-4f4a-97da-dc1549d143d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.254 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: global
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     log         /dev/log local0 debug
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     log-tag     haproxy-metadata-proxy-a733533a-76c7-46e6-89e3-803597fe93b6
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     user        root
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     group       root
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     maxconn     1024
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     pidfile     /var/lib/neutron/external/pids/a733533a-76c7-46e6-89e3-803597fe93b6.pid.haproxy
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     daemon
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: defaults
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     log global
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     mode http
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     option httplog
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     option dontlognull
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     option http-server-close
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     option forwardfor
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     retries                 3
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     timeout http-request    30s
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     timeout connect         30s
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     timeout client          32s
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     timeout server          32s
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     timeout http-keep-alive 30s
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: listen listener
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     bind 169.254.169.254:80
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:     http-request add-header X-OVN-Network-ID a733533a-76c7-46e6-89e3-803597fe93b6
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.255 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'env', 'PROCESS_TAG=haproxy-a733533a-76c7-46e6-89e3-803597fe93b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a733533a-76c7-46e6-89e3-803597fe93b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.322 2 DEBUG nova.compute.manager [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.324 2 DEBUG oslo_concurrency.lockutils [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.324 2 DEBUG oslo_concurrency.lockutils [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.324 2 DEBUG oslo_concurrency.lockutils [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.325 2 DEBUG nova.compute.manager [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Processing event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 09 10:03:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:44.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:44 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4110228176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:03:44 compute-1 podman[172768]: 2025-10-09 10:03:44.549138215 +0000 UTC m=+0.032537350 container create 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 09 10:03:44 compute-1 systemd[1]: Started libpod-conmon-440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0.scope.
Oct 09 10:03:44 compute-1 systemd[1]: Started libcrun container.
Oct 09 10:03:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5124fff80fb25973ea1610265ddf668756c8cb9c257f51808250cf95045e143/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 10:03:44 compute-1 podman[172768]: 2025-10-09 10:03:44.599981656 +0000 UTC m=+0.083380790 container init 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 10:03:44 compute-1 podman[172768]: 2025-10-09 10:03:44.604885902 +0000 UTC m=+0.088285036 container start 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 09 10:03:44 compute-1 podman[172768]: 2025-10-09 10:03:44.534810064 +0000 UTC m=+0.018209208 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 10:03:44 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [NOTICE]   (172784) : New worker (172786) forked
Oct 09 10:03:44 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [NOTICE]   (172784) : Loading success.
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.726 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.727 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004224.7257159, 29f00e1c-dcdd-4a28-b141-a900eb34b836 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.727 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] VM Started (Lifecycle Event)
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.730 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.732 2 INFO nova.virt.libvirt.driver [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance spawned successfully.
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.732 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.742 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.746 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.750 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.750 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.750 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.751 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.751 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.752 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.766 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.766 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004224.725822, 29f00e1c-dcdd-4a28-b141-a900eb34b836 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.766 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] VM Paused (Lifecycle Event)
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.784 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.786 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004224.7295604, 29f00e1c-dcdd-4a28-b141-a900eb34b836 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.787 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] VM Resumed (Lifecycle Event)
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.799 2 INFO nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Took 3.89 seconds to spawn the instance on the hypervisor.
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.799 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.800 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.806 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.833 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.849 2 INFO nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Took 4.64 seconds to build instance.
Oct 09 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.858 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:45.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:45 compute-1 ceph-mon[9795]: pgmap v921: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:03:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.395 2 DEBUG nova.compute.manager [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.396 2 DEBUG oslo_concurrency.lockutils [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.396 2 DEBUG oslo_concurrency.lockutils [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.396 2 DEBUG oslo_concurrency.lockutils [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.396 2 DEBUG nova.compute.manager [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.397 2 WARNING nova.compute.manager [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state active and task_state None.
Oct 09 10:03:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:46.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:47.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:47 compute-1 ovn_controller[62080]: 2025-10-09T10:03:47Z|00103|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct 09 10:03:47 compute-1 NetworkManager[982]: <info>  [1760004227.5131] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct 09 10:03:47 compute-1 NetworkManager[982]: <info>  [1760004227.5140] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 09 10:03:47 compute-1 nova_compute[162974]: 2025-10-09 10:03:47.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:47 compute-1 ovn_controller[62080]: 2025-10-09T10:03:47Z|00104|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct 09 10:03:47 compute-1 nova_compute[162974]: 2025-10-09 10:03:47.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:47 compute-1 nova_compute[162974]: 2025-10-09 10:03:47.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:47 compute-1 ceph-mon[9795]: pgmap v922: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Oct 09 10:03:47 compute-1 nova_compute[162974]: 2025-10-09 10:03:47.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:48.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.446 2 DEBUG nova.compute.manager [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.447 2 DEBUG nova.compute.manager [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing instance network info cache due to event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.447 2 DEBUG oslo_concurrency.lockutils [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.447 2 DEBUG oslo_concurrency.lockutils [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.447 2 DEBUG nova.network.neutron [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:48 compute-1 podman[172794]: 2025-10-09 10:03:48.553329074 +0000 UTC m=+0.064537409 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:03:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:49 compute-1 ceph-mon[9795]: pgmap v923: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Oct 09 10:03:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:03:49 compute-1 nova_compute[162974]: 2025-10-09 10:03:49.713 2 DEBUG nova.network.neutron [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updated VIF entry in instance network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:03:49 compute-1 nova_compute[162974]: 2025-10-09 10:03:49.714 2 DEBUG nova.network.neutron [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:03:49 compute-1 nova_compute[162974]: 2025-10-09 10:03:49.728 2 DEBUG oslo_concurrency.lockutils [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:03:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:50.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:51.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:51 compute-1 ceph-mon[9795]: pgmap v924: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 09 10:03:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:52 compute-1 nova_compute[162974]: 2025-10-09 10:03:52.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:53 compute-1 sudo[172819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:03:53 compute-1 sudo[172819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:53 compute-1 sudo[172819]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:53.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:53 compute-1 nova_compute[162974]: 2025-10-09 10:03:53.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:53 compute-1 ceph-mon[9795]: pgmap v925: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 10:03:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:54.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.131 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.131 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.132 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.132 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:55.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:03:55 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4083760399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.480 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.522 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.522 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 10:03:55 compute-1 ceph-mon[9795]: pgmap v926: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:03:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4083760399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.724 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.725 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4841MB free_disk=59.96738052368164GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.725 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.725 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:55 compute-1 ovn_controller[62080]: 2025-10-09T10:03:55Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:02:fe 10.100.0.4
Oct 09 10:03:55 compute-1 ovn_controller[62080]: 2025-10-09T10:03:55Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:02:fe 10.100.0.4
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.773 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance 29f00e1c-dcdd-4a28-b141-a900eb34b836 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.773 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.773 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:03:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.797 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:03:56 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/139525667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.136 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.140 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.153 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.165 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.166 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:56.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/139525667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/604600001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/325562166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.165 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.166 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.166 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.166 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:03:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:57.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:57 compute-1 ceph-mon[9795]: pgmap v927: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 09 10:03:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2413348394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/526253334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.225 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.226 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.226 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 09 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.226 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 29f00e1c-dcdd-4a28-b141-a900eb34b836 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:03:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:58.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:58 compute-1 podman[172892]: 2025-10-09 10:03:58.53218059 +0000 UTC m=+0.042938940 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid)
Oct 09 10:03:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:03:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:59.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.354 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.365 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.366 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 09 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.366 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.366 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.366 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:59 compute-1 ceph-mon[9795]: pgmap v928: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 09 10:04:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:00.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:01.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:01 compute-1 ceph-mon[9795]: pgmap v929: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 09 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.029 2 INFO nova.compute.manager [None req-6b62f797-84dc-40aa-9241-278bd198f44a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Get console output
Oct 09 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.033 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 09 10:04:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:02.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:02 compute-1 ovn_controller[62080]: 2025-10-09T10:04:02Z|00105|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct 09 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:02 compute-1 ovn_controller[62080]: 2025-10-09T10:04:02Z|00106|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct 09 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:03.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:03 compute-1 nova_compute[162974]: 2025-10-09 10:04:03.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:03 compute-1 nova_compute[162974]: 2025-10-09 10:04:03.594 2 INFO nova.compute.manager [None req-1c13b45b-b245-474c-97cb-d987f2d83afc 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Get console output
Oct 09 10:04:03 compute-1 nova_compute[162974]: 2025-10-09 10:04:03.598 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 09 10:04:03 compute-1 ceph-mon[9795]: pgmap v930: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Oct 09 10:04:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:04.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:04 compute-1 NetworkManager[982]: <info>  [1760004244.4412] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct 09 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:04 compute-1 NetworkManager[982]: <info>  [1760004244.4418] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct 09 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:04 compute-1 ovn_controller[62080]: 2025-10-09T10:04:04Z|00107|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct 09 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.617 2 INFO nova.compute.manager [None req-64d42f4c-0be6-4a6b-845d-545a6eabdd53 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Get console output
Oct 09 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.619 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 09 10:04:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.002 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.003 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.173 2 DEBUG nova.compute.manager [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.173 2 DEBUG nova.compute.manager [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing instance network info cache due to event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.173 2 DEBUG oslo_concurrency.lockutils [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.173 2 DEBUG oslo_concurrency.lockutils [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.174 2 DEBUG nova.network.neutron [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.216 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.216 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.216 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.217 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.217 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.218 2 INFO nova.compute.manager [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Terminating instance
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.219 2 DEBUG nova.compute.manager [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 09 10:04:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:05.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:05 compute-1 kernel: tapa450260b-c4 (unregistering): left promiscuous mode
Oct 09 10:04:05 compute-1 NetworkManager[982]: <info>  [1760004245.2574] device (tapa450260b-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00108|binding|INFO|Releasing lport a450260b-c4da-4f56-bf08-713a5ccc3d0e from this chassis (sb_readonly=0)
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00109|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e down in Southbound
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00110|binding|INFO|Removing iface tapa450260b-c4 ovn-installed in OVS
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.270 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:02:fe 10.100.0.4'], port_security=['fa:16:3e:ac:02:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29f00e1c-dcdd-4a28-b141-a900eb34b836', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a733533a-76c7-46e6-89e3-803597fe93b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '68293693-d770-49bf-b0b3-d26af71ce606', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0fc8e9a-af61-4397-9551-67e71824e91c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=a450260b-c4da-4f56-bf08-713a5ccc3d0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.271 71059 INFO neutron.agent.ovn.metadata.agent [-] Port a450260b-c4da-4f56-bf08-713a5ccc3d0e in datapath a733533a-76c7-46e6-89e3-803597fe93b6 unbound from our chassis
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.272 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a733533a-76c7-46e6-89e3-803597fe93b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.273 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[0458a028-735b-424a-80bd-6eaa46eba675]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.274 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 namespace which is not needed anymore
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 09 10:04:05 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000d.scope: Consumed 10.866s CPU time.
Oct 09 10:04:05 compute-1 systemd-machined[120683]: Machine qemu-7-instance-0000000d terminated.
Oct 09 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [NOTICE]   (172784) : haproxy version is 2.8.14-c23fe91
Oct 09 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [NOTICE]   (172784) : path to executable is /usr/sbin/haproxy
Oct 09 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [WARNING]  (172784) : Exiting Master process...
Oct 09 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [WARNING]  (172784) : Exiting Master process...
Oct 09 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [ALERT]    (172784) : Current worker (172786) exited with code 143 (Terminated)
Oct 09 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [WARNING]  (172784) : All workers exited. Exiting... (0)
Oct 09 10:04:05 compute-1 systemd[1]: libpod-440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0.scope: Deactivated successfully.
Oct 09 10:04:05 compute-1 podman[172935]: 2025-10-09 10:04:05.382092196 +0000 UTC m=+0.033962658 container died 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:04:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0-userdata-shm.mount: Deactivated successfully.
Oct 09 10:04:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-e5124fff80fb25973ea1610265ddf668756c8cb9c257f51808250cf95045e143-merged.mount: Deactivated successfully.
Oct 09 10:04:05 compute-1 podman[172935]: 2025-10-09 10:04:05.410651838 +0000 UTC m=+0.062522299 container cleanup 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:04:05 compute-1 systemd[1]: libpod-conmon-440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0.scope: Deactivated successfully.
Oct 09 10:04:05 compute-1 kernel: tapa450260b-c4: entered promiscuous mode
Oct 09 10:04:05 compute-1 NetworkManager[982]: <info>  [1760004245.4311] manager: (tapa450260b-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00111|binding|INFO|Claiming lport a450260b-c4da-4f56-bf08-713a5ccc3d0e for this chassis.
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00112|binding|INFO|a450260b-c4da-4f56-bf08-713a5ccc3d0e: Claiming fa:16:3e:ac:02:fe 10.100.0.4
Oct 09 10:04:05 compute-1 kernel: tapa450260b-c4 (unregistering): left promiscuous mode
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.439 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:02:fe 10.100.0.4'], port_security=['fa:16:3e:ac:02:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29f00e1c-dcdd-4a28-b141-a900eb34b836', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a733533a-76c7-46e6-89e3-803597fe93b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '68293693-d770-49bf-b0b3-d26af71ce606', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0fc8e9a-af61-4397-9551-67e71824e91c, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=a450260b-c4da-4f56-bf08-713a5ccc3d0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.451 2 INFO nova.virt.libvirt.driver [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance destroyed successfully.
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.451 2 DEBUG nova.objects.instance [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 29f00e1c-dcdd-4a28-b141-a900eb34b836 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00113|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e ovn-installed in OVS
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00114|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e up in Southbound
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00115|binding|INFO|Releasing lport a450260b-c4da-4f56-bf08-713a5ccc3d0e from this chassis (sb_readonly=1)
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00116|if_status|INFO|Not setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e down as sb is readonly
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00117|binding|INFO|Removing iface tapa450260b-c4 ovn-installed in OVS
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00118|binding|INFO|Releasing lport a450260b-c4da-4f56-bf08-713a5ccc3d0e from this chassis (sb_readonly=0)
Oct 09 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00119|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e down in Southbound
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.466 2 DEBUG nova.virt.libvirt.vif [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T10:03:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697662314',display_name='tempest-TestNetworkBasicOps-server-697662314',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697662314',id=13,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtKLJ6IG9u4a8nuHneFynw1vBGpmAOOthC0luN75md/pSNPLJ1OiBs1QaWTfRgLBRYBcOf7wBzJd4+LCaHfI9OClhJh7S3mGctEWrkgZF/O/aOkt4rBN7LklD620tBk2Q==',key_name='tempest-TestNetworkBasicOps-110254677',keypairs=<?>,launch_index=0,launched_at=2025-10-09T10:03:44Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-4alywc2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T10:03:44Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=29f00e1c-dcdd-4a28-b141-a900eb34b836,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.466 2 DEBUG nova.network.os_vif_util [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.467 2 DEBUG nova.network.os_vif_util [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.467 2 DEBUG os_vif [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa450260b-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.470 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:02:fe 10.100.0.4'], port_security=['fa:16:3e:ac:02:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29f00e1c-dcdd-4a28-b141-a900eb34b836', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a733533a-76c7-46e6-89e3-803597fe93b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '68293693-d770-49bf-b0b3-d26af71ce606', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0fc8e9a-af61-4397-9551-67e71824e91c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=a450260b-c4da-4f56-bf08-713a5ccc3d0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 10:04:05 compute-1 podman[172960]: 2025-10-09 10:04:05.473445195 +0000 UTC m=+0.047782140 container remove 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.475 2 INFO os_vif [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4')
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.482 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[82ed49b1-d13d-4012-becc-71a21eff88aa]: (4, ('Thu Oct  9 10:04:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 (440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0)\n440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0\nThu Oct  9 10:04:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 (440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0)\n440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.483 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6dec85df-2eeb-413c-8baf-e93272c8978c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.484 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa733533a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:04:05 compute-1 kernel: tapa733533a-70: left promiscuous mode
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.502 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[de9fa4d1-09f5-44b7-a93d-13b48ef9f74a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.517 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e502466b-c8c8-410b-ac93-d2199c15c99c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.519 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9e835a0c-9e76-4a80-a268-466216f59254]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.532 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f689ef73-c3e8-43c6-a22e-3da92bc68e4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 190765, 'reachable_time': 21811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 172993, 'error': None, 'target': 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.534 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.534 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[57191f81-4250-4832-b432-f6907a0887a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 systemd[1]: run-netns-ovnmeta\x2da733533a\x2d76c7\x2d46e6\x2d89e3\x2d803597fe93b6.mount: Deactivated successfully.
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.535 71059 INFO neutron.agent.ovn.metadata.agent [-] Port a450260b-c4da-4f56-bf08-713a5ccc3d0e in datapath a733533a-76c7-46e6-89e3-803597fe93b6 unbound from our chassis
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.536 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a733533a-76c7-46e6-89e3-803597fe93b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.536 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d9192a-1600-471e-a27b-5643cb1d9600]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.537 71059 INFO neutron.agent.ovn.metadata.agent [-] Port a450260b-c4da-4f56-bf08-713a5ccc3d0e in datapath a733533a-76c7-46e6-89e3-803597fe93b6 unbound from our chassis
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.538 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a733533a-76c7-46e6-89e3-803597fe93b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.538 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d907335e-c4f7-4ede-86b0-025c627aa913]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.646 2 INFO nova.virt.libvirt.driver [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Deleting instance files /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836_del
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.647 2 INFO nova.virt.libvirt.driver [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Deletion of /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836_del complete
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.688 2 INFO nova.compute.manager [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Took 0.47 seconds to destroy the instance on the hypervisor.
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.688 2 DEBUG oslo.service.loopingcall [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.689 2 DEBUG nova.compute.manager [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 09 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.689 2 DEBUG nova.network.neutron [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 09 10:04:05 compute-1 ceph-mon[9795]: pgmap v931: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 09 10:04:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:06 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:06.005 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:04:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:06.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.504 2 DEBUG nova.network.neutron [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.515 2 INFO nova.compute.manager [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Took 0.83 seconds to deallocate network for instance.
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.546 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.547 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.584 2 DEBUG nova.compute.manager [req-ea3375da-de79-4837-bcc6-19319315c01d req-c967bb49-92a5-47b0-93a4-b0574e425cfe b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-deleted-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.598 2 DEBUG oslo_concurrency.processutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:04:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:04:06 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2428919325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.945 2 DEBUG oslo_concurrency.processutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.949 2 DEBUG nova.compute.provider_tree [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.964 2 DEBUG nova.scheduler.client.report [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.982 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.998 2 INFO nova.scheduler.client.report [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 29f00e1c-dcdd-4a28-b141-a900eb34b836
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.040 2 DEBUG nova.network.neutron [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updated VIF entry in instance network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.041 2 DEBUG nova.network.neutron [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.051 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.058 2 DEBUG oslo_concurrency.lockutils [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.250 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-unplugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.250 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.251 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.251 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.251 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-unplugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.251 2 WARNING nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-unplugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.
Oct 09 10:04:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:07.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.252 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.252 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.252 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.253 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.253 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.253 2 WARNING nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.253 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.254 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.254 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.254 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.254 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.255 2 WARNING nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.255 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.255 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.256 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.256 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.256 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.256 2 WARNING nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.
Oct 09 10:04:07 compute-1 podman[173020]: 2025-10-09 10:04:07.559327691 +0000 UTC m=+0.056856636 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3)
Oct 09 10:04:07 compute-1 podman[173019]: 2025-10-09 10:04:07.576381291 +0000 UTC m=+0.081970091 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 09 10:04:07 compute-1 ceph-mon[9795]: pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 09 10:04:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2428919325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:08.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:09.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:09 compute-1 nova_compute[162974]: 2025-10-09 10:04:09.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:09 compute-1 nova_compute[162974]: 2025-10-09 10:04:09.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:09 compute-1 ceph-mon[9795]: pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 09 10:04:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:10.042 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:10.043 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:10.043 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:10.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:10 compute-1 nova_compute[162974]: 2025-10-09 10:04:10.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:11.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:11 compute-1 ceph-mon[9795]: pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 09 10:04:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:12.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/550343667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:04:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/550343667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:04:12 compute-1 nova_compute[162974]: 2025-10-09 10:04:12.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:13 compute-1 sudo[173055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:04:13 compute-1 sudo[173055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:13 compute-1 sudo[173055]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:13.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:13 compute-1 ceph-mon[9795]: pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 09 10:04:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:14.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:15.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:15 compute-1 nova_compute[162974]: 2025-10-09 10:04:15.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:15 compute-1 ceph-mon[9795]: pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 09 10:04:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:16.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:17 compute-1 ceph-mon[9795]: pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 09 10:04:17 compute-1 nova_compute[162974]: 2025-10-09 10:04:17.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:18.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:19 compute-1 podman[173084]: 2025-10-09 10:04:19.570181094 +0000 UTC m=+0.080980526 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:04:19 compute-1 ceph-mon[9795]: pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:04:20 compute-1 nova_compute[162974]: 2025-10-09 10:04:20.450 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004245.4491017, 29f00e1c-dcdd-4a28-b141-a900eb34b836 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:04:20 compute-1 nova_compute[162974]: 2025-10-09 10:04:20.451 2 INFO nova.compute.manager [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] VM Stopped (Lifecycle Event)
Oct 09 10:04:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:20.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:20 compute-1 nova_compute[162974]: 2025-10-09 10:04:20.463 2 DEBUG nova.compute.manager [None req-1cc77880-e6ac-40e5-87e8-2e77630f4495 - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:04:20 compute-1 nova_compute[162974]: 2025-10-09 10:04:20.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:21.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:21 compute-1 ceph-mon[9795]: pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:22.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:22 compute-1 nova_compute[162974]: 2025-10-09 10:04:22.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:23.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:23 compute-1 ceph-mon[9795]: pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:24.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:25.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:25 compute-1 nova_compute[162974]: 2025-10-09 10:04:25.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:25 compute-1 ceph-mon[9795]: pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:26.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:27.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:27 compute-1 ceph-mon[9795]: pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:27 compute-1 nova_compute[162974]: 2025-10-09 10:04:27.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:28.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:29.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:29 compute-1 podman[173112]: 2025-10-09 10:04:29.525157179 +0000 UTC m=+0.038114074 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 09 10:04:29 compute-1 ceph-mon[9795]: pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:30.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:30 compute-1 nova_compute[162974]: 2025-10-09 10:04:30.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:31.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:31 compute-1 ceph-mon[9795]: pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:32.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:32 compute-1 nova_compute[162974]: 2025-10-09 10:04:32.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:33 compute-1 sudo[173130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:04:33 compute-1 sudo[173130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:33 compute-1 sudo[173130]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:33.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:33 compute-1 ceph-mon[9795]: pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:34.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:04:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:35.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:35 compute-1 nova_compute[162974]: 2025-10-09 10:04:35.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:35 compute-1 ceph-mon[9795]: pgmap v946: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.845369) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275845392, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2350, "num_deletes": 251, "total_data_size": 6083901, "memory_usage": 6164256, "flush_reason": "Manual Compaction"}
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275854515, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3946077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25943, "largest_seqno": 28288, "table_properties": {"data_size": 3936844, "index_size": 5727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19559, "raw_average_key_size": 20, "raw_value_size": 3918125, "raw_average_value_size": 4051, "num_data_blocks": 252, "num_entries": 967, "num_filter_entries": 967, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004069, "oldest_key_time": 1760004069, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 9168 microseconds, and 6029 cpu microseconds.
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854538) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3946077 bytes OK
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854548) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854832) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854842) EVENT_LOG_v1 {"time_micros": 1760004275854839, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854852) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6073476, prev total WAL file size 6073476, number of live WAL files 2.
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.855755) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3853KB)], [51(11MB)]
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275855778, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16286126, "oldest_snapshot_seqno": -1}
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5801 keys, 14127090 bytes, temperature: kUnknown
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275891013, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14127090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14087878, "index_size": 23614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147447, "raw_average_key_size": 25, "raw_value_size": 13982440, "raw_average_value_size": 2410, "num_data_blocks": 962, "num_entries": 5801, "num_filter_entries": 5801, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.891301) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14127090 bytes
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.894350) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 460.1 rd, 399.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6317, records dropped: 516 output_compression: NoCompression
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.894364) EVENT_LOG_v1 {"time_micros": 1760004275894357, "job": 30, "event": "compaction_finished", "compaction_time_micros": 35400, "compaction_time_cpu_micros": 19942, "output_level": 6, "num_output_files": 1, "total_output_size": 14127090, "num_input_records": 6317, "num_output_records": 5801, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275895361, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275897415, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.855668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:36.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:37.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:37 compute-1 nova_compute[162974]: 2025-10-09 10:04:37.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:37 compute-1 ceph-mon[9795]: pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:38.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:38 compute-1 podman[173158]: 2025-10-09 10:04:38.529335148 +0000 UTC m=+0.039967088 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 10:04:38 compute-1 podman[173159]: 2025-10-09 10:04:38.53927653 +0000 UTC m=+0.046357755 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Oct 09 10:04:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:39 compute-1 ceph-mon[9795]: pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:40 compute-1 ovn_controller[62080]: 2025-10-09T10:04:40Z|00120|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct 09 10:04:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:40.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:40 compute-1 nova_compute[162974]: 2025-10-09 10:04:40.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:41.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:41 compute-1 ceph-mon[9795]: pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:42.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:42 compute-1 sudo[173193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:04:42 compute-1 sudo[173193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:42 compute-1 sudo[173193]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:42 compute-1 sudo[173218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:04:42 compute-1 sudo[173218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:42 compute-1 nova_compute[162974]: 2025-10-09 10:04:42.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:43 compute-1 sudo[173218]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:43.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:43 compute-1 ceph-mon[9795]: pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:04:43 compute-1 ceph-mon[9795]: pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:04:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:45.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:45 compute-1 nova_compute[162974]: 2025-10-09 10:04:45.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:46 compute-1 ceph-mon[9795]: pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:46.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:46 compute-1 sudo[173274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:04:46 compute-1 sudo[173274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:46 compute-1 sudo[173274]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:47.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:04:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:04:47 compute-1 ceph-mon[9795]: pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:47 compute-1 nova_compute[162974]: 2025-10-09 10:04:47.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:48.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:49.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:50 compute-1 ceph-mon[9795]: pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:04:50 compute-1 nova_compute[162974]: 2025-10-09 10:04:50.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:50.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:50 compute-1 podman[173301]: 2025-10-09 10:04:50.565058528 +0000 UTC m=+0.069027252 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 09 10:04:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:51.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:04:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1800.0 total, 600.0 interval
                                          Cumulative writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                          Cumulative WAL: 12K writes, 3773 syncs, 3.36 writes per sync, written: 0.03 GB, 0.02 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 3402 writes, 12K keys, 3402 commit groups, 1.0 writes per commit group, ingest: 13.97 MB, 0.02 MB/s
                                          Interval WAL: 3402 writes, 1492 syncs, 2.28 writes per sync, written: 0.01 GB, 0.02 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 09 10:04:52 compute-1 ceph-mon[9795]: pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:04:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:52.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:52 compute-1 nova_compute[162974]: 2025-10-09 10:04:52.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:53 compute-1 sudo[173325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:04:53 compute-1 sudo[173325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:53 compute-1 sudo[173325]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:53.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:54 compute-1 sshd-session[173351]: Accepted publickey for zuul from 192.168.122.10 port 43312 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 10:04:54 compute-1 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 10:04:54 compute-1 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 10:04:54 compute-1 systemd-logind[798]: New session 40 of user zuul.
Oct 09 10:04:54 compute-1 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 10:04:54 compute-1 systemd[1]: Starting User Manager for UID 1000...
Oct 09 10:04:54 compute-1 systemd[173355]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:04:54 compute-1 systemd[173355]: Queued start job for default target Main User Target.
Oct 09 10:04:54 compute-1 ceph-mon[9795]: pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:54 compute-1 systemd[173355]: Created slice User Application Slice.
Oct 09 10:04:54 compute-1 systemd[173355]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 10:04:54 compute-1 systemd[173355]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 10:04:54 compute-1 systemd[173355]: Reached target Paths.
Oct 09 10:04:54 compute-1 systemd[173355]: Reached target Timers.
Oct 09 10:04:54 compute-1 systemd[173355]: Starting D-Bus User Message Bus Socket...
Oct 09 10:04:54 compute-1 systemd[173355]: Starting Create User's Volatile Files and Directories...
Oct 09 10:04:54 compute-1 systemd[173355]: Finished Create User's Volatile Files and Directories.
Oct 09 10:04:54 compute-1 systemd[173355]: Listening on D-Bus User Message Bus Socket.
Oct 09 10:04:54 compute-1 systemd[173355]: Reached target Sockets.
Oct 09 10:04:54 compute-1 systemd[173355]: Reached target Basic System.
Oct 09 10:04:54 compute-1 systemd[173355]: Reached target Main User Target.
Oct 09 10:04:54 compute-1 systemd[173355]: Startup finished in 106ms.
Oct 09 10:04:54 compute-1 systemd[1]: Started User Manager for UID 1000.
Oct 09 10:04:54 compute-1 systemd[1]: Started Session 40 of User zuul.
Oct 09 10:04:54 compute-1 sshd-session[173351]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:04:54 compute-1 sudo[173371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 09 10:04:54 compute-1 sudo[173371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:04:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:54.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:55.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:55 compute-1 nova_compute[162974]: 2025-10-09 10:04:55.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.134 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.134 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.150 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.150 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.150 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.150 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.151 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:04:56 compute-1 ceph-mon[9795]: pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:04:56 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4131648813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.502 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:04:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:56.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.730 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.731 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4963MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.731 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.732 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.806 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.806 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.828 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:04:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:04:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/204166457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.166 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.170 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:04:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 09 10:04:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/33619624' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:04:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:57.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:57 compute-1 ceph-mon[9795]: from='client.26359 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4131648813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/336862096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/968507340' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:04:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/204166457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/33619624' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:04:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3305344509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/551610347' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.364 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.384 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.385 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:58 compute-1 ceph-mon[9795]: from='client.26645 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-1 ceph-mon[9795]: from='client.16800 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-1 ceph-mon[9795]: from='client.26386 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-1 ceph-mon[9795]: from='client.26663 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-1 ceph-mon[9795]: from='client.16827 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-1 ceph-mon[9795]: pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.365 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.365 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.365 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.365 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.376 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.376 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.376 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.377 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:04:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:58.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:59 compute-1 nova_compute[162974]: 2025-10-09 10:04:59.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:59 compute-1 nova_compute[162974]: 2025-10-09 10:04:59.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:04:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:59.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3527318548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3280518930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:59 compute-1 ovs-vsctl[173704]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 09 10:05:00 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 09 10:05:00 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 09 10:05:00 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 09 10:05:00 compute-1 ceph-mon[9795]: pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:00 compute-1 podman[173880]: 2025-10-09 10:05:00.418233576 +0000 UTC m=+0.058208856 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:05:00 compute-1 nova_compute[162974]: 2025-10-09 10:05:00.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:00.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:00 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: cache status {prefix=cache status} (starting...)
Oct 09 10:05:00 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:00 compute-1 lvm[174020]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:05:00 compute-1 lvm[174020]: VG ceph_vg0 finished
Oct 09 10:05:00 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: client ls {prefix=client ls} (starting...)
Oct 09 10:05:00 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:00 compute-1 kernel: block vda: the capability attribute has been deprecated.
Oct 09 10:05:01 compute-1 nova_compute[162974]: 2025-10-09 10:05:01.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:01 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 10:05:01 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: damage ls {prefix=damage ls} (starting...)
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:01.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:01 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2670855265' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump loads {prefix=dump loads} (starting...)
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 09 10:05:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1761942395' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 09 10:05:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3126869266' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 09 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 09 10:05:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3069884087' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 09 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 09 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 09 10:05:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1937344101' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.26440 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.26705 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3065331311' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1435948674' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1761942395' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3912654270' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3126869266' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3069884087' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1937344101' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/900548494' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: ops {prefix=ops} (starting...)
Oct 09 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:02.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 09 10:05:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/108986200' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:05:02 compute-1 nova_compute[162974]: 2025-10-09 10:05:02.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:05:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2685102981' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:05:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3857343826' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: session ls {prefix=session ls} (starting...)
Oct 09 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:05:03 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: status {prefix=status} (starting...)
Oct 09 10:05:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 09 10:05:03 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1643024054' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:03.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.26461 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.26717 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.26479 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.26744 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.16899 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.26503 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.26765 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.26509 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3249911151' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2389407953' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/108986200' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3242075144' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2685102981' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3857343826' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3551404967' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1391361791' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1337245173' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1643024054' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/845671599' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 09 10:05:03 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/406982026' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 09 10:05:03 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2541611345' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 09 10:05:03 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1522824198' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 09 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3984711169' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 09 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4057339861' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 09 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/668757568' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.26801 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.26548 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.26810 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.16971 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.16974 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.26831 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.17007 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/406982026' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2101833220' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2635017389' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2541611345' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/802223364' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1522824198' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1062635565' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1096112260' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3081418931' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1240712361' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3984711169' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4057339861' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/668757568' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 09 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/64288757' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:05:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:04.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 09 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3256037916' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 09 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2889519608' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 09 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3026430711' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2521577722' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 09 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002817725' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:05:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:05.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.17034 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.26644 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.26909 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/461285032' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/64288757' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3331225401' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3256037916' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3984257628' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2889519608' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/604418692' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3026430711' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3772235470' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3683240453' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2521577722' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4002817725' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:05:05 compute-1 nova_compute[162974]: 2025-10-09 10:05:05.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 09 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1164563443' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 09 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62328478' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 09 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2194491156' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 09 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670162803' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 09 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1284909676' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:34.941570+0000 osd.0 (osd.0) 114 : cluster [DBG] 6.d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:34.959415+0000 osd.0 (osd.0) 115 : cluster [DBG] 6.d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 81 heartbeat osd_stat(store_statfs(0x4fcaaa000/0x0/0x4ffc00000, data 0xf7546/0x16f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 81 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.829947 2 0.000040
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.830111 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.830134 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000052 1 0.000089
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.830132 2 0.000026
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.830229 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.830467 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000122 1 0.000368
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000112 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.985656 2 0.000044
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.985770 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.985784 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000035 1 0.000060
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.985156 2 0.000150
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.985395 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.985542 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000051 1 0.000072
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000080 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.990245 7 0.000054
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.010673 2 0.000068
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.010722 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000062 1 0.000090
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 DELETING pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008442 2 0.000146
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008557 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started 1.009599 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 5726208 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 115)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:34.941570+0000 osd.0 (osd.0) 114 : cluster [DBG] 6.d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:34.959415+0000 osd.0 (osd.0) 115 : cluster [DBG] 6.d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:06.640772+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:35.948999+0000 osd.0 (osd.0) 116 : cluster [DBG] 10.7 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:35.977314+0000 osd.0 (osd.0) 117 : cluster [DBG] 10.7 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.003266 6 0.000283
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.002763 6 0.000108
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.004300 6 0.000227
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.005224 6 0.000027
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 40'220 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002210 3 0.000223
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 40'220 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 40'220 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000163 1 0.000054
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 40'220 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002160 3 0.000052
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.063973 1 0.000041
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.063721 1 0.000027
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 40'312 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.066622 3 0.000123
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 40'312 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 4571136 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 722256 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.014849 1 0.000028
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 40'403 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.080712 3 0.000109
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 40'403 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 40'312 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.014776 1 0.000227
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 40'312 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028655 1 0.000152
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 40'403 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.028748 1 0.000025
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 40'403 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028923 1 0.000029
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 117)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:35.948999+0000 osd.0 (osd.0) 116 : cluster [DBG] 10.7 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:35.977314+0000 osd.0 (osd.0) 117 : cluster [DBG] 10.7 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:07.640948+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:36.987857+0000 osd.0 (osd.0) 118 : cluster [DBG] 6.8 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:36.998448+0000 osd.0 (osd.0) 119 : cluster [DBG] 6.8 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 83 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.877427 1 0.000019
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.015892 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.021150 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000064 1 0.000106
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000035
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.935508 1 0.000045
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.016315 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.020769 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000051
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000026
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.906811 1 0.000038
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.017051 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.020604 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000045
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000039
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.950788 1 0.000090
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.017298 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.020200 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000060 1 0.000089
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000040
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001704 3 0.000020
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=24
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=24
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002143 3 0.000046
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001867 3 0.000021
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=52
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=52
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001607 3 0.000040
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 4571136 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 119)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:36.987857+0000 osd.0 (osd.0) 118 : cluster [DBG] 6.8 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:36.998448+0000 osd.0 (osd.0) 119 : cluster [DBG] 6.8 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:08.641105+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:38.015419+0000 osd.0 (osd.0) 120 : cluster [DBG] 9.10 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:38.025980+0000 osd.0 (osd.0) 121 : cluster [DBG] 9.10 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 85 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003913 2 0.000039
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005856 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004278 2 0.000049
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006036 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004361 2 0.000044
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006045 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004600 2 0.000040
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006809 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000955 3 0.000111
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000110 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001211 3 0.000095
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/61 les/c/f=85/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001815 4 0.000094
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/61 les/c/f=85/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/61 les/c/f=85/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/61 les/c/f=85/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/60 les/c/f=85/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001311 4 0.000180
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/60 les/c/f=85/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/60 les/c/f=85/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/60 les/c/f=85/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79290368 unmapped: 4554752 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 121)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:38.015419+0000 osd.0 (osd.0) 120 : cluster [DBG] 9.10 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:38.025980+0000 osd.0 (osd.0) 121 : cluster [DBG] 9.10 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:09.641299+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:39.015740+0000 osd.0 (osd.0) 122 : cluster [DBG] 11.12 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:39.026363+0000 osd.0 (osd.0) 123 : cluster [DBG] 11.12 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 85 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 4538368 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 123)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:39.015740+0000 osd.0 (osd.0) 122 : cluster [DBG] 11.12 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:39.026363+0000 osd.0 (osd.0) 123 : cluster [DBG] 11.12 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 85 heartbeat osd_stat(store_statfs(0x4fca98000/0x0/0x4ffc00000, data 0x1019a5/0x182000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:10.641459+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:40.036061+0000 osd.0 (osd.0) 124 : cluster [DBG] 5.1f scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:40.046647+0000 osd.0 (osd.0) 125 : cluster [DBG] 5.1f scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.634215355s of 10.720481873s, submitted: 146
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 4521984 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 125)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:40.036061+0000 osd.0 (osd.0) 124 : cluster [DBG] 5.1f scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:40.046647+0000 osd.0 (osd.0) 125 : cluster [DBG] 5.1f scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:11.641623+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:41.072064+0000 osd.0 (osd.0) 126 : cluster [DBG] 9.15 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:41.082635+0000 osd.0 (osd.0) 127 : cluster [DBG] 9.15 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 4513792 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744623 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 127)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:41.072064+0000 osd.0 (osd.0) 126 : cluster [DBG] 9.15 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:41.082635+0000 osd.0 (osd.0) 127 : cluster [DBG] 9.15 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:12.641739+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:42.033207+0000 osd.0 (osd.0) 128 : cluster [DBG] 5.1b scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:42.043671+0000 osd.0 (osd.0) 129 : cluster [DBG] 5.1b scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 4497408 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 129)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:42.033207+0000 osd.0 (osd.0) 128 : cluster [DBG] 5.1b scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:42.043671+0000 osd.0 (osd.0) 129 : cluster [DBG] 5.1b scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:13.641878+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:43.005433+0000 osd.0 (osd.0) 130 : cluster [DBG] 8.14 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:43.016005+0000 osd.0 (osd.0) 131 : cluster [DBG] 8.14 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 4489216 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c(unlocked)] enter Initial
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=0 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=0 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 131)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:43.005433+0000 osd.0 (osd.0) 130 : cluster [DBG] 8.14 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:43.016005+0000 osd.0 (osd.0) 131 : cluster [DBG] 8.14 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000163 1 0.000113
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000198 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c(unlocked)] enter Initial
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=0 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000076 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=0 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000031
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000116 1 0.000050
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000182 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:14.642031+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:43.992046+0000 osd.0 (osd.0) 132 : cluster [DBG] 5.18 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:44.002642+0000 osd.0 (osd.0) 133 : cluster [DBG] 5.18 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d(unlocked)] enter Initial
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=0 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=0 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000224 1 0.000027
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000260
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000219 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d(unlocked)] enter Initial
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=0 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000081 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=0 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000032
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000078 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 4472832 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.446883 2 0.000050
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.447127 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.447148 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000055 1 0.000087
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.010575 2 0.000078
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.010785 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.010818 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000185 1 0.000238
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.447413 2 0.000035
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.447606 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.447631 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000092 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000120 1 0.000246
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.012179 3 0.000044
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.012400 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.012418 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000164 1 0.000336
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 133)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:43.992046+0000 osd.0 (osd.0) 132 : cluster [DBG] 5.18 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:44.002642+0000 osd.0 (osd.0) 133 : cluster [DBG] 5.18 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:15.642258+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:44.966417+0000 osd.0 (osd.0) 134 : cluster [DBG] 3.1c scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:44.977132+0000 osd.0 (osd.0) 135 : cluster [DBG] 3.1c scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 88 heartbeat osd_stat(store_statfs(0x4fca93000/0x0/0x4ffc00000, data 0x105eb3/0x188000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 4456448 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.000920 6 0.000041
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.002443 6 0.000229
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.004712 6 0.000059
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.003890 6 0.000034
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 40'355 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002905 3 0.000107
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 40'355 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 40'355 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000026 1 0.000045
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 40'355 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 40'443 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002774 3 0.000063
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 40'443 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 135)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:44.966417+0000 osd.0 (osd.0) 134 : cluster [DBG] 3.1c scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:44.977132+0000 osd.0 (osd.0) 135 : cluster [DBG] 3.1c scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049967 1 0.000018
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 40'221 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.052357 3 0.000054
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 40'221 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 40'443 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.048178 1 0.000039
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 40'443 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035778 1 0.000073
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 40'221 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.035942 1 0.000018
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 40'278 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.086813 3 0.000096
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 40'278 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 40'221 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.036259 1 0.000142
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 40'278 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.036350 1 0.000175
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 40'278 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:16.642408+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:46.007645+0000 osd.0 (osd.0) 136 : cluster [DBG] 5.15 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:46.018393+0000 osd.0 (osd.0) 137 : cluster [DBG] 5.15 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.056873 1 0.000068
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fca8f000/0x0/0x4ffc00000, data 0x107f1d/0x18b000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 5373952 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807366 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.923776 1 0.000040
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.010630 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.015371 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000055 1 0.000088
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.887557 1 0.000060
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.012293 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.014908 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000100 1 0.000165
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000039 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.830846 1 0.000045
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.011018 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.014946 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000060 1 0.000106
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.960638 1 0.000055
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.013622 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.014566 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000031 1 0.000049
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001920 2 0.001333
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001867 2 0.000026
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001597 2 0.000021
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=34
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=34
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000651 2 0.000046
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=46
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=46
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000385 2 0.000088
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 137)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:46.007645+0000 osd.0 (osd.0) 136 : cluster [DBG] 5.15 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:46.018393+0000 osd.0 (osd.0) 137 : cluster [DBG] 5.15 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002741 2 0.000228
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=41
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=41
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001621 2 0.000057
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000975 2 0.000082
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:17.642533+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:47.015195+0000 osd.0 (osd.0) 138 : cluster [DBG] 11.1b deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:47.025514+0000 osd.0 (osd.0) 139 : cluster [DBG] 11.1b deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0x10a0e3/0x192000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 4325376 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 139)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:47.015195+0000 osd.0 (osd.0) 138 : cluster [DBG] 11.1b deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:47.025514+0000 osd.0 (osd.0) 139 : cluster [DBG] 11.1b deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003970 2 0.000087
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007258 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004129 2 0.000073
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007940 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006884 2 0.000131
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009570 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007430 2 0.000055
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009730 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=90/68 les/c/f=91/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002309 4 0.000136
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=90/68 les/c/f=91/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=90/68 les/c/f=91/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=90/68 les/c/f=91/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/67 les/c/f=91/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002230 4 0.000066
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/67 les/c/f=91/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/67 les/c/f=91/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000147 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/67 les/c/f=91/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001205 3 0.000120
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000068 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000989 3 0.000066
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:18.642715+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:48.009791+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.19 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:48.020293+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.19 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fca81000/0x0/0x4ffc00000, data 0x10e083/0x198000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 4325376 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 141)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:48.009791+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.19 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:48.020293+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.19 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fca81000/0x0/0x4ffc00000, data 0x10e083/0x198000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:19.642870+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:49.003908+0000 osd.0 (osd.0) 142 : cluster [DBG] 3.a scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:49.014471+0000 osd.0 (osd.0) 143 : cluster [DBG] 3.a scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 4276224 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 143)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:49.003908+0000 osd.0 (osd.0) 142 : cluster [DBG] 3.a scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:49.014471+0000 osd.0 (osd.0) 143 : cluster [DBG] 3.a scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:20.643029+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:50.050664+0000 osd.0 (osd.0) 144 : cluster [DBG] 4.d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:50.061245+0000 osd.0 (osd.0) 145 : cluster [DBG] 4.d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 4268032 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 145)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:50.050664+0000 osd.0 (osd.0) 144 : cluster [DBG] 4.d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:50.061245+0000 osd.0 (osd.0) 145 : cluster [DBG] 4.d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:21.643146+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:51.057765+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.1c deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:51.068325+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.1c deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.965123177s of 11.030270576s, submitted: 91
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 4268032 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817441 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 147)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:51.057765+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.1c deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:51.068325+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.1c deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 ms_handle_reset con 0x560c9a066000 session 0x560c9b906960
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 ms_handle_reset con 0x560c9c8a3000 session 0x560c9cb92b40
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:22.643362+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:52.102350+0000 osd.0 (osd.0) 148 : cluster [DBG] 11.5 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:52.112442+0000 osd.0 (osd.0) 149 : cluster [DBG] 11.5 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e(unlocked)] enter Initial
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=0 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000056 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=0 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000029
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000127 1 0.000053
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001114 2 0.000037
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0x10e083/0x198000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 4210688 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 149)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:52.102350+0000 osd.0 (osd.0) 148 : cluster [DBG] 11.5 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:52.112442+0000 osd.0 (osd.0) 149 : cluster [DBG] 11.5 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 92 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.604680 2 0.000060
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.605975 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.000920 3 0.000133
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000059 1 0.000053
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007495 3 0.000054
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:23.643519+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:53.124186+0000 osd.0 (osd.0) 150 : cluster [DBG] 4.a deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:53.134732+0000 osd.0 (osd.0) 151 : cluster [DBG] 4.a deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 4177920 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 151)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:53.124186+0000 osd.0 (osd.0) 150 : cluster [DBG] 4.a deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:53.134732+0000 osd.0 (osd.0) 151 : cluster [DBG] 4.a deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active+clean] exit Started/Primary/Active/Clean 42.182703 99 0.000380
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active 42.470143 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary 43.469295 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started 43.469476 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532474518s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 41'42 active pruub 256.462554932s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] exit Reset 0.000068 1 0.000110
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.176490 56 0.000205
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.177861 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 25.182837 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 25.182919 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823891640s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 active pruub 258.754333496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] exit Reset 0.000087 1 0.000142
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.176838 56 0.000229
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.177740 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 25.182406 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 25.182635 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823626518s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 active pruub 258.754333496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] exit Reset 0.000034 1 0.000162
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 handle_osd_map epochs [93,94], i have 94, src has [1,94]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:24.643648+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:54.163584+0000 osd.0 (osd.0) 152 : cluster [DBG] 11.1a scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:54.174132+0000 osd.0 (osd.0) 153 : cluster [DBG] 11.1a scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0x1122ac/0x19e000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 4161536 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 153)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:54.163584+0000 osd.0 (osd.0) 152 : cluster [DBG] 11.1a scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:54.174132+0000 osd.0 (osd.0) 153 : cluster [DBG] 11.1a scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012433 3 0.000024
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013088 6 0.000060
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.012624 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000101 1 0.000301
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000057 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000229
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013078 3 0.000034
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.013265 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000063 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000127 1 0.000314
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000026 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000122 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000394
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:25.643765+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:55.128030+0000 osd.0 (osd.0) 154 : cluster [DBG] 9.a scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:55.138570+0000 osd.0 (osd.0) 155 : cluster [DBG] 9.a scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.127902 3 0.000070
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.127937 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000044 1 0.000049
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 DELETING pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.024065 2 0.000119
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.024180 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started 1.165278 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 4030464 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 155)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:55.128030+0000 osd.0 (osd.0) 154 : cluster [DBG] 9.a scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:55.138570+0000 osd.0 (osd.0) 155 : cluster [DBG] 9.a scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 95 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003209 4 0.000309
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003660 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003036 4 0.000106
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003718 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002752 5 0.001184
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000140 1 0.000035
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.002922 5 0.001264
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000546 1 0.000085
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035475 2 0.000068
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.035972 1 0.000061
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000411 1 0.000084
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052313 2 0.000074
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:26.643871+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:56.102483+0000 osd.0 (osd.0) 156 : cluster [DBG] 9.d deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:56.113102+0000 osd.0 (osd.0) 157 : cluster [DBG] 9.d deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 4005888 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835463 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 157)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:56.102483+0000 osd.0 (osd.0) 156 : cluster [DBG] 9.d deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:56.113102+0000 osd.0 (osd.0) 157 : cluster [DBG] 9.d deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.918601 1 0.000162
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010735 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.014492 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.014731 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991911888s) [2] async=[2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 40'1059 active pruub 260.950622559s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] exit Reset 0.000221 1 0.000192
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.972112 1 0.000139
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] exit Start 0.000012 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.011702 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.015622 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.015817 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991250038s) [2] async=[2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 40'1059 active pruub 260.950653076s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] exit Reset 0.000146 1 0.000333
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] exit Start 0.000048 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:27.644019+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:57.084626+0000 osd.0 (osd.0) 158 : cluster [DBG] 5.1 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:57.095177+0000 osd.0 (osd.0) 159 : cluster [DBG] 5.1 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 3989504 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 159)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:57.084626+0000 osd.0 (osd.0) 158 : cluster [DBG] 5.1 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:57.095177+0000 osd.0 (osd.0) 159 : cluster [DBG] 5.1 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009114 7 0.000607
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000059 1 0.000099
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009441 7 0.000567
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000045 1 0.000065
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 DELETING pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060747 2 0.000135
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060862 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.070389 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:28.644138+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:58.057181+0000 osd.0 (osd.0) 160 : cluster [DBG] 3.5 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:58.067659+0000 osd.0 (osd.0) 161 : cluster [DBG] 3.5 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 DELETING pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097205 2 0.000120
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097316 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.106874 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 3948544 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 161)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:58.057181+0000 osd.0 (osd.0) 160 : cluster [DBG] 3.5 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:58.067659+0000 osd.0 (osd.0) 161 : cluster [DBG] 3.5 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:29.644301+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:59.070555+0000 osd.0 (osd.0) 162 : cluster [DBG] 3.d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:38:59.081043+0000 osd.0 (osd.0) 163 : cluster [DBG] 3.d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fca72000/0x0/0x4ffc00000, data 0x11c141/0x1a9000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 3948544 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fca72000/0x0/0x4ffc00000, data 0x11c141/0x1a9000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 163)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:59.070555+0000 osd.0 (osd.0) 162 : cluster [DBG] 3.d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:38:59.081043+0000 osd.0 (osd.0) 163 : cluster [DBG] 3.d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:30.644437+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:00.061437+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.7 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:00.072091+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.7 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 4030464 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 165)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:00.061437+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.7 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:00.072091+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.7 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:31.644570+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:01.030710+0000 osd.0 (osd.0) 166 : cluster [DBG] 11.4 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:01.041284+0000 osd.0 (osd.0) 167 : cluster [DBG] 11.4 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 4030464 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824002 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.433236122s of 10.488536835s, submitted: 65
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 61.435893 137 0.001785
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 61.440356 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 62.443238 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 62.443279 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564367294s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 261.555847168s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] exit Reset 0.000320 1 0.000596
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] exit Start 0.000127 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 167)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:01.030710+0000 osd.0 (osd.0) 166 : cluster [DBG] 11.4 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:01.041284+0000 osd.0 (osd.0) 167 : cluster [DBG] 11.4 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:32.644711+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:02.002268+0000 osd.0 (osd.0) 168 : cluster [DBG] 5.9 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:02.012616+0000 osd.0 (osd.0) 169 : cluster [DBG] 5.9 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 4022272 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a0000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 169)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:02.002268+0000 osd.0 (osd.0) 168 : cluster [DBG] 5.9 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:02.012616+0000 osd.0 (osd.0) 169 : cluster [DBG] 5.9 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023387 3 0.000577
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.023914 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000043 1 0.000068
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000032
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:33.644824+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:02.997071+0000 osd.0 (osd.0) 170 : cluster [DBG] 11.1 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:03.007965+0000 osd.0 (osd.0) 171 : cluster [DBG] 11.1 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 3997696 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 171)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:02.997071+0000 osd.0 (osd.0) 170 : cluster [DBG] 11.1 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:03.007965+0000 osd.0 (osd.0) 171 : cluster [DBG] 11.1 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012636 4 0.000046
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.012719 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 101 handle_osd_map epochs [100,101], i have 101, src has [1,101]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:34.644936+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:03.987451+0000 osd.0 (osd.0) 172 : cluster [DBG] 3.3 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:03.998091+0000 osd.0 (osd.0) 173 : cluster [DBG] 3.3 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 3981312 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fca69000/0x0/0x4ffc00000, data 0x12230e/0x1b2000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.612325 5 0.000236
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000090 1 0.000088
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000535 1 0.000054
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014216 2 0.000036
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.378852 1 0.000045
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006177 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.018921 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.018951 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605948448s) [2] async=[2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 269.640716553s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] exit Reset 0.000212 1 0.000290
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] exit Start 0.000104 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 173)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:03.987451+0000 osd.0 (osd.0) 172 : cluster [DBG] 3.3 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:03.998091+0000 osd.0 (osd.0) 173 : cluster [DBG] 3.3 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:35.645050+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:05.000385+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.e scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:05.011007+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.e scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 3964928 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a1000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 102 heartbeat osd_stat(store_statfs(0x4fca65000/0x0/0x4ffc00000, data 0x1242c5/0x1b5000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:36.645172+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 4 last_log 177 sent 175 num 4 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:06.004409+0000 osd.0 (osd.0) 176 : cluster [DBG] 5.2 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:06.014745+0000 osd.0 (osd.0) 177 : cluster [DBG] 5.2 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 175)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:05.000385+0000 osd.0 (osd.0) 174 : cluster [DBG] 9.e scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:05.011007+0000 osd.0 (osd.0) 175 : cluster [DBG] 9.e scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.338445 6 0.000245
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001112 2 0.000055
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 DELETING pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029484 2 0.000098
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030635 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.369253 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 3956736 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844352 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fca65000/0x0/0x4ffc00000, data 0x1242c5/0x1b5000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:37.645302+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 4 last_log 179 sent 177 num 4 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:06.972914+0000 osd.0 (osd.0) 178 : cluster [DBG] 11.f scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:06.990912+0000 osd.0 (osd.0) 179 : cluster [DBG] 11.f scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 177)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:06.004409+0000 osd.0 (osd.0) 176 : cluster [DBG] 5.2 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:06.014745+0000 osd.0 (osd.0) 177 : cluster [DBG] 5.2 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 5005312 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:38.645425+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 4 last_log 181 sent 179 num 4 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:08.014044+0000 osd.0 (osd.0) 180 : cluster [DBG] 5.7 deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:08.024646+0000 osd.0 (osd.0) 181 : cluster [DBG] 5.7 deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 179)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:06.972914+0000 osd.0 (osd.0) 178 : cluster [DBG] 11.f scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:06.990912+0000 osd.0 (osd.0) 179 : cluster [DBG] 11.f scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 181)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:08.014044+0000 osd.0 (osd.0) 180 : cluster [DBG] 5.7 deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:08.024646+0000 osd.0 (osd.0) 181 : cluster [DBG] 5.7 deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 4997120 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a0800
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:39.645540+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:09.053907+0000 osd.0 (osd.0) 182 : cluster [DBG] 5.16 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:09.064441+0000 osd.0 (osd.0) 183 : cluster [DBG] 5.16 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 183)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:09.053907+0000 osd.0 (osd.0) 182 : cluster [DBG] 5.16 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:09.064441+0000 osd.0 (osd.0) 183 : cluster [DBG] 5.16 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 4980736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:40.645664+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:10.018576+0000 osd.0 (osd.0) 184 : cluster [DBG] 8.8 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:10.029159+0000 osd.0 (osd.0) 185 : cluster [DBG] 8.8 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 185)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:10.018576+0000 osd.0 (osd.0) 184 : cluster [DBG] 8.8 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:10.029159+0000 osd.0 (osd.0) 185 : cluster [DBG] 8.8 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x12c483/0x1c1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 4972544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:41.645831+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:11.033291+0000 osd.0 (osd.0) 186 : cluster [DBG] 8.4 deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:11.043899+0000 osd.0 (osd.0) 187 : cluster [DBG] 8.4 deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 187)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:11.033291+0000 osd.0 (osd.0) 186 : cluster [DBG] 8.4 deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:11.043899+0000 osd.0 (osd.0) 187 : cluster [DBG] 8.4 deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 4964352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863956 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:42.645984+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:12.006218+0000 osd.0 (osd.0) 188 : cluster [DBG] 11.1d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:12.016609+0000 osd.0 (osd.0) 189 : cluster [DBG] 11.1d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.360844612s of 10.417983055s, submitted: 97
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 189)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:12.006218+0000 osd.0 (osd.0) 188 : cluster [DBG] 11.1d scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:12.016609+0000 osd.0 (osd.0) 189 : cluster [DBG] 11.1d scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 4956160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fca53000/0x0/0x4ffc00000, data 0x130581/0x1c7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 108 handle_osd_map epochs [109,110], i have 108, src has [1,110]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 108 handle_osd_map epochs [109,110], i have 110, src has [1,110]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:43.646130+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:13.008917+0000 osd.0 (osd.0) 190 : cluster [DBG] 3.10 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:13.019444+0000 osd.0 (osd.0) 191 : cluster [DBG] 3.10 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 110 heartbeat osd_stat(store_statfs(0x4fca53000/0x0/0x4ffc00000, data 0x130581/0x1c7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 191)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:13.008917+0000 osd.0 (osd.0) 190 : cluster [DBG] 3.10 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:13.019444+0000 osd.0 (osd.0) 191 : cluster [DBG] 3.10 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 4882432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:44.646291+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:14.040126+0000 osd.0 (osd.0) 192 : cluster [DBG] 5.f deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:14.050642+0000 osd.0 (osd.0) 193 : cluster [DBG] 5.f deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 193)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:14.040126+0000 osd.0 (osd.0) 192 : cluster [DBG] 5.f deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:14.050642+0000 osd.0 (osd.0) 193 : cluster [DBG] 5.f deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 4882432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:45.646472+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:15.042208+0000 osd.0 (osd.0) 194 : cluster [DBG] 5.10 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:15.052741+0000 osd.0 (osd.0) 195 : cluster [DBG] 5.10 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fca4a000/0x0/0x4ffc00000, data 0x13649d/0x1d0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 195)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:15.042208+0000 osd.0 (osd.0) 194 : cluster [DBG] 5.10 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:15.052741+0000 osd.0 (osd.0) 195 : cluster [DBG] 5.10 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 4874240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fca4a000/0x0/0x4ffc00000, data 0x13649d/0x1d0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:46.646621+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:16.052775+0000 osd.0 (osd.0) 196 : cluster [DBG] 4.e scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:16.063161+0000 osd.0 (osd.0) 197 : cluster [DBG] 4.e scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 ms_handle_reset con 0x560c9c8a0800 session 0x560c9b5625a0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 ms_handle_reset con 0x560c9c8a0000 session 0x560c9dae41e0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fca4a000/0x0/0x4ffc00000, data 0x13649d/0x1d0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 197)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:16.052775+0000 osd.0 (osd.0) 196 : cluster [DBG] 4.e scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:16.063161+0000 osd.0 (osd.0) 197 : cluster [DBG] 4.e scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4825088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876276 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:47.646748+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:17.070349+0000 osd.0 (osd.0) 198 : cluster [DBG] 4.5 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:17.080984+0000 osd.0 (osd.0) 199 : cluster [DBG] 4.5 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 199)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:17.070349+0000 osd.0 (osd.0) 198 : cluster [DBG] 4.5 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:17.080984+0000 osd.0 (osd.0) 199 : cluster [DBG] 4.5 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4825088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:48.646911+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:18.090399+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.11 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:18.100961+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.11 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 201)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:18.090399+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.11 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:18.100961+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.11 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 handle_osd_map epochs [112,113], i have 111, src has [1,113]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 4775936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fca4c000/0x0/0x4ffc00000, data 0x13649d/0x1d0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:49.647045+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:19.094532+0000 osd.0 (osd.0) 202 : cluster [DBG] 5.11 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:19.105094+0000 osd.0 (osd.0) 203 : cluster [DBG] 5.11 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 203)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:19.094532+0000 osd.0 (osd.0) 202 : cluster [DBG] 5.11 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:19.105094+0000 osd.0 (osd.0) 203 : cluster [DBG] 5.11 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 4775936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:50.647227+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:20.092922+0000 osd.0 (osd.0) 204 : cluster [DBG] 11.1e deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:20.103525+0000 osd.0 (osd.0) 205 : cluster [DBG] 11.1e deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fca45000/0x0/0x4ffc00000, data 0x13a675/0x1d6000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 205)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:20.092922+0000 osd.0 (osd.0) 204 : cluster [DBG] 11.1e deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:20.103525+0000 osd.0 (osd.0) 205 : cluster [DBG] 11.1e deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 4759552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:51.647426+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:21.134995+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.12 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:21.149182+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.12 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 207)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:21.134995+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.12 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:21.149182+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.12 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 4751360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 887737 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:52.647567+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:22.175716+0000 osd.0 (osd.0) 208 : cluster [DBG] 8.12 deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:22.189936+0000 osd.0 (osd.0) 209 : cluster [DBG] 8.12 deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 209)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:22.175716+0000 osd.0 (osd.0) 208 : cluster [DBG] 8.12 deep-scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:22.189936+0000 osd.0 (osd.0) 209 : cluster [DBG] 8.12 deep-scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.160198212s of 10.195916176s, submitted: 38
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 4743168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:53.647730+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:23.204848+0000 osd.0 (osd.0) 210 : cluster [DBG] 11.14 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:23.218803+0000 osd.0 (osd.0) 211 : cluster [DBG] 11.14 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 211)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:23.204848+0000 osd.0 (osd.0) 210 : cluster [DBG] 11.14 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:23.218803+0000 osd.0 (osd.0) 211 : cluster [DBG] 11.14 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4734976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:54.647893+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:24.159533+0000 osd.0 (osd.0) 212 : cluster [DBG] 8.18 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:24.170122+0000 osd.0 (osd.0) 213 : cluster [DBG] 8.18 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 213)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:24.159533+0000 osd.0 (osd.0) 212 : cluster [DBG] 8.18 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:24.170122+0000 osd.0 (osd.0) 213 : cluster [DBG] 8.18 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 114 handle_osd_map epochs [115,116], i have 114, src has [1,116]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 51.111732 107 0.000551
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 51.112677 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 52.116691 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 52.116835 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888633728s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 active pruub 286.476196289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] exit Reset 0.000051 1 0.000093
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 116 handle_osd_map epochs [115,116], i have 116, src has [1,116]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4718592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:55.648025+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:25.110753+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.6 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:25.124893+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.6 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 116 heartbeat osd_stat(store_statfs(0x4fc632000/0x0/0x4ffc00000, data 0x13c761/0x1d9000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.885412 3 0.000032
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.885452 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000058 1 0.000091
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001993 2 0.000042
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 215)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:25.110753+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.6 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:25.124893+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.6 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 4677632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:56.648178+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 217 sent 215 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:26.123458+0000 osd.0 (osd.0) 216 : cluster [DBG] 8.17 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:26.137713+0000 osd.0 (osd.0) 217 : cluster [DBG] 8.17 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 117 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003893 3 0.000109
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.006004 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.001670 5 0.000246
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000093 1 0.000076
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000426 1 0.000032
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049484 2 0.000092
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 217)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:26.123458+0000 osd.0 (osd.0) 216 : cluster [DBG] 8.17 scrub starts
Oct 09 10:05:05 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:26.137713+0000 osd.0 (osd.0) 217 : cluster [DBG] 8.17 scrub ok
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a1800
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4661248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908847 data_alloc: 218103808 data_used: 286720
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:57.648317+0000)
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.952902 1 0.000067
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004903 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.010952 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.010983 0 0.000000
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996621132s) [1] async=[1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 40'1059 active pruub 291.480834961s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] exit Reset 0.000164 1 0.000388
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] enter Started
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] enter Start
Oct 09 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] exit Start 0.000043 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] enter Started/Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 4653056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc621000/0x0/0x4ffc00000, data 0x14687f/0x1e8000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:58.648420+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010565 6 0.000260
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000504 2 0.000625
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 DELETING pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053072 2 0.000214
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053887 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.064819 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4603904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:59.648553+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4603904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:00.648665+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 4595712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:01.648732+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 4595712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902006 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:02.648879+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 219 sent 217 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:32.120342+0000 osd.0 (osd.0) 218 : cluster [DBG] 8.1b scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:32.130939+0000 osd.0 (osd.0) 219 : cluster [DBG] 8.1b scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 219)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:32.120342+0000 osd.0 (osd.0) 218 : cluster [DBG] 8.1b scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:32.130939+0000 osd.0 (osd.0) 219 : cluster [DBG] 8.1b scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 4587520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:03.649106+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 221 sent 219 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:33.105199+0000 osd.0 (osd.0) 220 : cluster [DBG] 8.10 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:33.119297+0000 osd.0 (osd.0) 221 : cluster [DBG] 8.10 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.481028557s of 10.509338379s, submitted: 33
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x1487b3/0x1ea000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 221)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:33.105199+0000 osd.0 (osd.0) 220 : cluster [DBG] 8.10 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:33.119297+0000 osd.0 (osd.0) 221 : cluster [DBG] 8.10 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 4579328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:04.649322+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 223 sent 221 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:34.117601+0000 osd.0 (osd.0) 222 : cluster [DBG] 9.f scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:34.135252+0000 osd.0 (osd.0) 223 : cluster [DBG] 9.f scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 223)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:34.117601+0000 osd.0 (osd.0) 222 : cluster [DBG] 9.f scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:34.135252+0000 osd.0 (osd.0) 223 : cluster [DBG] 9.f scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 4579328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fc61e000/0x0/0x4ffc00000, data 0x14a89f/0x1ed000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 56.544703 110 0.000890
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 56.546599 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 57.552659 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 57.552698 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455636024s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 active pruub 299.496215820s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] exit Reset 0.000286 1 0.000376
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] enter Started
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] enter Start
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] exit Start 0.000124 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] enter Started/Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:05.649554+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 225 sent 223 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:35.108940+0000 osd.0 (osd.0) 224 : cluster [DBG] 6.e scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:35.123469+0000 osd.0 (osd.0) 225 : cluster [DBG] 6.e scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.437780 3 0.000259
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.437976 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000065 1 0.000114
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000049
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 225)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:35.108940+0000 osd.0 (osd.0) 224 : cluster [DBG] 6.e scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:35.123469+0000 osd.0 (osd.0) 225 : cluster [DBG] 6.e scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4562944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:06.649732+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 227 sent 225 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:36.096817+0000 osd.0 (osd.0) 226 : cluster [DBG] 6.5 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:36.114486+0000 osd.0 (osd.0) 227 : cluster [DBG] 6.5 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [123,124], i have 124, src has [1,124]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000506 4 0.000053
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000616 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 227)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:36.096817+0000 osd.0 (osd.0) 226 : cluster [DBG] 6.5 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:36.114486+0000 osd.0 (osd.0) 227 : cluster [DBG] 6.5 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4554752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920632 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:07.649941+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 229 sent 227 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:37.143355+0000 osd.0 (osd.0) 228 : cluster [DBG] 6.2 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:37.153996+0000 osd.0 (osd.0) 229 : cluster [DBG] 6.2 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.929512 5 0.000230
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000052 1 0.000056
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000620 1 0.000022
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014229 2 0.000090
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 229)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:37.143355+0000 osd.0 (osd.0) 228 : cluster [DBG] 6.2 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:37.153996+0000 osd.0 (osd.0) 229 : cluster [DBG] 6.2 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.254469 1 0.000142
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.199120 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.199756 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.199779 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730270386s) [1] async=[1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 40'1059 active pruub 302.409027100s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] exit Reset 0.000089 1 0.000138
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Started
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Start
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Started/Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4521984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:08.650074+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 231 sent 229 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:38.131309+0000 osd.0 (osd.0) 230 : cluster [DBG] 6.a scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:38.141838+0000 osd.0 (osd.0) 231 : cluster [DBG] 6.a scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4521984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 231)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:38.131309+0000 osd.0 (osd.0) 230 : cluster [DBG] 6.a scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:38.141838+0000 osd.0 (osd.0) 231 : cluster [DBG] 6.a scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:09.650201+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 233 sent 231 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:39.145733+0000 osd.0 (osd.0) 232 : cluster [DBG] 6.3 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:39.163525+0000 osd.0 (osd.0) 233 : cluster [DBG] 6.3 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.861375 6 0.000071
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000940 2 0.000043
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 DELETING pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039841 2 0.000114
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.040842 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.902274 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4513792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 233)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:39.145733+0000 osd.0 (osd.0) 232 : cluster [DBG] 6.3 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:39.163525+0000 osd.0 (osd.0) 233 : cluster [DBG] 6.3 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:10.650339+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 235 sent 233 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:40.139145+0000 osd.0 (osd.0) 234 : cluster [DBG] 10.1a scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:40.186147+0000 osd.0 (osd.0) 235 : cluster [DBG] 10.1a scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4513792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 235)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:40.139145+0000 osd.0 (osd.0) 234 : cluster [DBG] 10.1a scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:40.186147+0000 osd.0 (osd.0) 235 : cluster [DBG] 10.1a scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc60f000/0x0/0x4ffc00000, data 0x154aa9/0x1fb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 77.464063 170 0.000532
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 77.465815 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 78.470504 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 78.470528 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538806915s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 active pruub 300.480133057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] exit Reset 0.000078 1 0.000123
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Started
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Start
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Started/Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:11.650504+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 237 sent 235 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:41.125652+0000 osd.0 (osd.0) 236 : cluster [DBG] 10.1d scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:41.153898+0000 osd.0 (osd.0) 237 : cluster [DBG] 10.1d scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4505600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929125 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 237)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:41.125652+0000 osd.0 (osd.0) 236 : cluster [DBG] 10.1d scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:41.153898+0000 osd.0 (osd.0) 237 : cluster [DBG] 10.1d scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.770187 3 0.000226
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.770216 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000058 1 0.000081
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000035
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:12.650671+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 239 sent 237 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:42.125609+0000 osd.0 (osd.0) 238 : cluster [DBG] 10.9 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:42.153846+0000 osd.0 (osd.0) 239 : cluster [DBG] 10.9 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4497408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 239)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:42.125609+0000 osd.0 (osd.0) 238 : cluster [DBG] 10.9 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:42.153846+0000 osd.0 (osd.0) 239 : cluster [DBG] 10.9 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 128 handle_osd_map epochs [128,129], i have 129, src has [1,129]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002967 4 0.000048
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003062 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:13.650847+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 241 sent 239 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:43.103288+0000 osd.0 (osd.0) 240 : cluster [DBG] 10.c scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:43.131563+0000 osd.0 (osd.0) 241 : cluster [DBG] 10.c scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.287339211s of 10.341490746s, submitted: 63
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f(unlocked)] enter Initial
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=0 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=0 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000024
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000106 1 0.000033
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000143 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.917700 5 0.000274
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000080 1 0.000041
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000345 1 0.000023
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035394 2 0.000101
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4497408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 241)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:43.103288+0000 osd.0 (osd.0) 240 : cluster [DBG] 10.c scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:43.131563+0000 osd.0 (osd.0) 241 : cluster [DBG] 10.c scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.100369 2 0.000045
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.100600 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.064024 1 0.000046
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.017850 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.020940 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.020967 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.100759 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.899759293s) [2] async=[2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 40'1059 active pruub 308.632507324s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000458 1 0.000730
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000102 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] exit Reset 0.004411 1 0.004537
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Started
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Start
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] exit Start 0.000009 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Started/Stray
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x15ac76/0x204000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:14.650986+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 243 sent 241 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:44.055789+0000 osd.0 (osd.0) 242 : cluster [DBG] 10.6 deep-scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:44.087089+0000 osd.0 (osd.0) 243 : cluster [DBG] 10.6 deep-scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 4489216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 243)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:44.055789+0000 osd.0 (osd.0) 242 : cluster [DBG] 10.6 deep-scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:44.087089+0000 osd.0 (osd.0) 243 : cluster [DBG] 10.6 deep-scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x15cc5f/0x207000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.207355 5 0.000520
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.203806 6 0.000178
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001910 2 0.000149
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003054 4 0.000130
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000036
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 DELETING pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.042146 2 0.000241
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.044125 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.248037 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.070374 1 0.000064
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:15.651601+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 245 sent 243 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:45.026245+0000 osd.0 (osd.0) 244 : cluster [DBG] 10.a scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:45.068603+0000 osd.0 (osd.0) 245 : cluster [DBG] 10.a scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.466371 1 0.000036
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.539968 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 1.747725 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000078 1 0.000113
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000469 2 0.000032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=32
Oct 09 10:05:06 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=32
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001068 2 0.000055
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 245)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:45.026245+0000 osd.0 (osd.0) 244 : cluster [DBG] 10.a scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:45.068603+0000 osd.0 (osd.0) 245 : cluster [DBG] 10.a scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 132 ms_handle_reset con 0x560c9c8a1800 session 0x560c9d630d20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:16.651802+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 247 sent 245 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:45.994370+0000 osd.0 (osd.0) 246 : cluster [DBG] 10.0 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:46.033184+0000 osd.0 (osd.0) 247 : cluster [DBG] 10.0 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [132,133], i have 133, src has [1,133]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002487 2 0.000117
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004414 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002016 4 0.001053
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952396 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 247)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:45.994370+0000 osd.0 (osd.0) 246 : cluster [DBG] 10.0 scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:46.033184+0000 osd.0 (osd.0) 247 : cluster [DBG] 10.0 scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:17.651995+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 249 sent 247 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:46.955262+0000 osd.0 (osd.0) 248 : cluster [DBG] 10.d scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:46.993836+0000 osd.0 (osd.0) 249 : cluster [DBG] 10.d scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 249)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:46.955262+0000 osd.0 (osd.0) 248 : cluster [DBG] 10.d scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:46.993836+0000 osd.0 (osd.0) 249 : cluster [DBG] 10.d scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:18.652114+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 251 sent 249 num 2 unsent 2 sending 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:47.939990+0000 osd.0 (osd.0) 250 : cluster [DBG] 10.b scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:47.964665+0000 osd.0 (osd.0) 251 : cluster [DBG] 10.b scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 251)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:47.939990+0000 osd.0 (osd.0) 250 : cluster [DBG] 10.b scrub starts
Oct 09 10:05:06 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:47.964665+0000 osd.0 (osd.0) 251 : cluster [DBG] 10.b scrub ok
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:19.652284+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4415488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:20.652404+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:21.652549+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953544 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:22.652660+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:23.652804+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:24.652956+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:25.653111+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:26.653242+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a066000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.116673470s of 13.153404236s, submitted: 45
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953676 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:27.653378+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:28.653541+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4374528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:29.653737+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4472832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:30.653834+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4472832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:31.653928+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4464640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954348 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:32.654012+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a1000 session 0x560c9d2dc5a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9b7d1800 session 0x560c9d8512c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4464640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:33.654143+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9cbe2c00 session 0x560c9c5994a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9cf88000 session 0x560c9d20c960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4456448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:34.654311+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4456448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:35.654457+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4448256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:36.654584+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4448256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954348 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:37.654760+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:38.654861+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:39.655773+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:40.655931+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:41.656130+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.990660667s of 14.992744446s, submitted: 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4415488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954216 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:42.656297+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:43.656441+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a0000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:44.656552+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a0800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:45.656699+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:46.656832+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954480 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:47.656958+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:48.657068+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:49.657169+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:50.657290+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4366336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:51.657407+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4366336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955992 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:52.657556+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4358144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:53.657665+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4358144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:54.657816+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9a066000 session 0x560c9d208d20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4349952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:55.658001+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.551061630s of 13.556247711s, submitted: 4
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4349952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:56.658144+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955401 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:57.658248+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:58.658348+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:59.658513+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4333568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:00.658632+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4333568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:01.658780+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:02.658927+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:03.659061+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:04.659154+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 4317184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:05.659269+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 4308992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:06.659372+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:07.659513+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:08.659640+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:09.659773+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4292608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:10.659929+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4292608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:11.660030+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:12.660160+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:13.660277+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:14.660425+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 4276224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:15.660581+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 4276224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:16.660740+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4268032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:17.660840+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4268032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:18.660935+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:19.661027+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:20.661128+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:21.661234+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.443452835s of 26.446563721s, submitted: 3
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:22.661335+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:23.661432+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:24.661537+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4235264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:25.661652+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4235264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:26.661775+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 4227072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:27.661887+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 4227072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:28.661979+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:29.662119+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:30.662270+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:31.662412+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:32.662527+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:33.662636+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:34.662727+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:35.662829+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:36.662931+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:37.663029+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:38.663120+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:39.663221+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:40.663337+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:41.663431+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:42.663543+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:43.663673+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:44.663802+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 4186112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:45.663938+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 4186112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:46.664078+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:47.664204+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:48.664297+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:49.664390+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4169728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:50.664634+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4169728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:51.664731+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4161536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:52.664819+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4161536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:53.664925+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:54.665055+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:55.665225+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:56.665354+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:57.665483+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:58.665606+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:59.665723+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4136960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:00.665865+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4136960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:01.666021+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9dab7400 session 0x560c9d20f860
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a0800 session 0x560c9cf7a960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:02.666135+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:03.666239+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:04.666346+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4120576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:05.666527+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4120576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:06.666656+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4112384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:07.666818+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4112384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:08.666950+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:09.667126+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:10.667232+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:11.667381+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4096000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9b7d1800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.453056335s of 50.454822540s, submitted: 1
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:12.667484+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4096000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:13.667630+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4087808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:14.667749+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4087808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:15.667884+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:16.668019+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:17.668152+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:18.668280+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a1000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 4071424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:19.668383+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 4071424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:20.668532+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 4063232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:21.668635+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 4063232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:22.668749+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:23.668859+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:24.668962+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:25.669089+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 4046848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:26.669210+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 4046848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:27.669307+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 4038656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:28.669402+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 4038656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.000545502s of 17.003219604s, submitted: 2
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:29.669496+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:30.669593+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:31.669704+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:32.669842+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:33.669948+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:34.670050+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:35.670185+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 4014080 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:36.670290+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 4014080 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:37.670390+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 4005888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:38.670485+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 4005888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:39.670578+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3997696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:40.670721+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3997696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:41.670832+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3989504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:42.670937+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3989504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:43.671026+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 3981312 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:44.671176+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3964928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:45.671293+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3964928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:46.671607+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:47.671710+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:48.671810+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:49.671902+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3948544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:50.672006+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3948544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:51.672131+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3940352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:52.672249+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d2dd0e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a0000 session 0x560c9d20fa40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3940352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:53.672362+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:54.672495+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:55.672669+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:56.672813+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:57.672955+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:58.673130+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:59.673265+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:00.673366+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3923968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:01.673474+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3923968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:02.673623+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3915776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:03.673738+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3915776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9cbe2c00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.123451233s of 34.124847412s, submitted: 1
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:04.673877+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3907584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:05.674001+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3907584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:06.674146+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:07.674254+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:08.674373+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:09.674477+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3891200 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:10.674581+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3883008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:11.674684+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3883008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:12.674994+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3874816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:13.675117+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3874816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:14.675318+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:15.675465+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:16.675567+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:17.675704+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3858432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:18.675813+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3858432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.923893929s of 14.924749374s, submitted: 1
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:19.675931+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:20.676061+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:21.676159+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:22.676253+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3842048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:23.676396+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3842048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:24.676508+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3833856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:25.676637+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3833856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:26.676743+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:27.676859+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:28.676982+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:29.677132+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3817472 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:30.677266+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 3809280 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:31.677357+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:32.677451+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:33.677552+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:34.677656+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3792896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:35.677718+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3792896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:36.677814+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3784704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:37.677923+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3784704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:38.678014+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:39.678143+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:40.678229+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:41.678322+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3768320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:42.678412+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3768320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:43.678513+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:44.678634+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:45.678763+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:46.679008+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3751936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:47.679116+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3751936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:48.679220+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3743744 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:49.680003+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3743744 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:50.680107+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:51.680224+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:52.680355+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:53.680491+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 3727360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:54.680627+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 3727360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:55.680760+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:56.680866+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:57.680999+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:58.681108+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3702784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:59.681234+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3702784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:00.681333+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 3694592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:01.681447+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 3694592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:02.681541+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 3686400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:03.681651+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 3686400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:04.681745+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:05.681848+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:06.681943+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:07.682044+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:08.682140+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:09.682235+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:10.682334+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:11.682444+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:12.682546+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:13.682650+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 3653632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:14.682724+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 3653632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:15.682831+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 3637248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:16.682926+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 3637248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:17.683026+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 3629056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:18.683422+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 3629056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:19.683534+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:20.683653+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:21.683735+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:22.683869+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 3612672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:23.684006+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 3612672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:24.684120+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 3604480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:25.684248+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 3604480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:26.684389+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:27.684494+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:28.684603+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:29.684726+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 3588096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:30.684828+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:31.684925+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:32.685031+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:33.685122+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:34.685224+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 3571712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:35.685346+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 3571712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:36.685449+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3563520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:37.685551+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3563520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:38.685662+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3555328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:39.685788+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3555328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:40.685894+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:41.686025+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:42.686157+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:43.686257+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 3530752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:44.686378+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 3530752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:45.686530+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3522560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:46.686647+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3522560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:47.686786+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:48.686960+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:49.687066+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:50.687220+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3506176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:51.687331+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3506176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:52.687446+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:53.687558+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:54.687766+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:55.687895+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 3489792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:56.688002+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 3489792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:57.688101+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:58.688197+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:59.688295+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:00.688405+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:01.688503+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:02.688626+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:03.688751+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:04.688855+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:05.688973+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:06.689090+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:07.689208+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:08.689346+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:09.689462+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:10.689902+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:11.689993+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:12.690106+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:13.690246+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3416064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:14.690379+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3416064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:15.690526+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:16.690617+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:17.690739+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:18.690857+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:19.691010+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:20.691145+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Cumulative writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                          Cumulative WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 21.18 MB, 0.04 MB/s
                                          Interval WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
                                          
                                          ** Compaction Stats [m-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-0] **
                                          
                                          ** Compaction Stats [m-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-1] **
                                          
                                          ** Compaction Stats [m-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-2] **
                                          
                                          ** Compaction Stats [p-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-0] **
                                          
                                          ** Compaction Stats [p-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-1] **
                                          
                                          ** Compaction Stats [p-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-2] **
                                          
                                          ** Compaction Stats [O-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-0] **
                                          
                                          ** Compaction Stats [O-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-1] **
                                          
                                          ** Compaction Stats [O-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-2] **
                                          
                                          ** Compaction Stats [L] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [L] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [L] **
                                          
                                          ** Compaction Stats [P] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [P] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [P] **
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:21.691276+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:22.691414+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:23.691507+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:24.691599+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3317760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:25.691714+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:26.691884+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:27.692050+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:28.692213+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:29.692377+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:30.692538+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:31.692661+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:32.692806+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:33.692950+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:34.693109+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:35.693260+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:36.693392+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:37.693497+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:38.693633+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:39.693780+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:40.693931+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:41.694061+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:42.694181+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:43.694324+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3244032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:44.694455+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3244032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:45.694584+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:46.694702+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:47.694829+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:48.694976+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:49.695082+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:50.695190+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:51.695324+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:52.695416+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:53.695535+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:54.695647+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:55.695809+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:56.695922+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:57.696056+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:58.696201+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:59.696342+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:00.696468+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:01.696607+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:02.696739+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:03.696853+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:04.696996+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:05.697175+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:06.697301+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:07.697416+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:08.697518+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 3153920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:09.697613+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 3153920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:10.697727+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:11.697817+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:12.697915+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:13.698017+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:14.698123+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:15.698239+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:16.698347+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:17.698451+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:18.698543+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:19.698643+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:20.698739+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:21.698839+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:22.698944+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:23.699061+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:24.699196+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:25.699325+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:26.699429+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:27.699535+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:28.699651+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:29.699731+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:30.699825+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:31.699920+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:32.700008+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:33.700115+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:34.700227+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:35.700342+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:36.700450+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:37.700615+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:38.700779+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:39.700891+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:40.701029+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:41.701139+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:42.701282+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:43.701381+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:44.701481+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 3014656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 206.934524536s of 206.935745239s, submitted: 1
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:45.701582+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [1])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 2695168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:46.701674+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:47.701790+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:48.701882+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:49.702062+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:50.702169+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:51.702284+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:52.702399+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:53.702505+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:54.702619+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:55.702772+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:56.702872+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:57.702970+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:58.703093+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:59.703194+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:00.703316+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:01.703442+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:02.703552+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:03.703679+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:04.703790+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:05.703947+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:06.704062+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:07.704160+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:08.704310+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:09.704473+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:10.704581+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:11.704700+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:12.704802+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:13.704971+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:14.705138+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:15.705262+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:16.705362+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:17.705493+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:18.705586+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:19.705713+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:20.705811+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:21.705916+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:22.706019+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:23.706124+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:24.706225+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:25.706357+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:26.706456+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:27.706560+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:28.706655+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:29.706723+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:30.706825+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:31.706938+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:32.707038+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:33.707137+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:34.707238+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:35.707380+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:36.707494+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:37.707609+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:38.707757+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:39.707863+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:40.708004+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:41.708114+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:42.708224+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:43.708315+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:44.708406+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:45.708531+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:46.708620+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:47.708721+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:48.708830+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:49.708928+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:50.709025+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:51.709126+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:52.709223+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:53.709327+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:54.709431+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:55.709546+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:56.709650+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:57.709752+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:58.709884+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:59.709996+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:00.710096+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:01.710198+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:02.710293+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:03.710397+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:04.710516+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:05.710648+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:06.710795+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:07.710899+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:08.710992+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:09.711088+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:10.711190+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:11.711322+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:12.711423+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:13.711525+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:14.711677+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:15.711813+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:16.711934+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:17.712100+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:18.712223+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:19.712400+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:20.712530+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:21.712669+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:22.712742+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:23.712844+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:24.712961+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:25.713087+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:26.713229+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:27.713322+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:28.713420+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:29.713510+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:30.713611+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:31.713734+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:32.713834+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:33.713929+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:34.714024+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:35.714139+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:36.714241+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:37.714344+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:38.714451+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:39.714559+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:40.714744+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:41.714850+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:42.714944+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:43.715039+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:44.715136+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:45.715251+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:46.715360+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:47.715473+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:48.715601+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:49.715708+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:50.715829+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:51.715945+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:52.716041+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:53.716156+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:54.716279+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:55.716394+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:56.716494+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:57.716585+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:58.716695+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:59.716808+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:00.716911+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:01.717012+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:02.717114+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:03.717250+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:04.717375+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:05.717504+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:06.717619+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:07.717735+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:08.717827+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:09.717929+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:10.718040+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:11.718153+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:12.718261+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:13.718356+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:14.718447+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:15.718577+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:16.718680+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:17.718835+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:18.718958+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:19.719055+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:20.719145+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:21.719235+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:22.719581+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:23.719709+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:24.719802+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:25.719915+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:26.720012+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:27.720103+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:28.720197+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:29.720291+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:30.720396+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:31.720499+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:32.720613+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:33.720783+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:34.720918+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:35.721075+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:36.721238+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:37.721348+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:38.721446+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:39.721570+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:40.721730+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:41.721837+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:42.721927+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:43.722017+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:44.722117+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:45.722240+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:46.722353+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:47.722458+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:48.722589+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:49.722730+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:50.722879+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:51.723023+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:52.723164+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:53.723296+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:54.723438+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:55.723611+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:56.723730+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:57.723848+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:58.723989+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:59.724101+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:00.724202+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:01.724671+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:02.724786+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:03.724883+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:04.725006+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:05.725116+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:06.725214+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:07.725325+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:08.725428+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:09.725561+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:10.725678+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:11.725788+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:12.725890+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:13.725986+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:14.726081+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:15.726205+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:16.726314+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:17.726424+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:18.726526+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:19.726639+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:20.726731+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:21.726841+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:22.726946+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:23.727042+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:24.727183+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:25.727305+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:26.727402+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:27.727492+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:28.727588+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:29.727701+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:30.727801+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:31.727925+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:32.728045+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:33.728163+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:34.728306+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:35.728450+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:36.728589+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:37.728710+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:38.728811+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:39.728924+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:40.729026+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:41.729130+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:42.729231+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:43.729345+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:44.729450+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:45.729583+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:46.729713+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:47.729813+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:48.729914+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:49.730025+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:50.730134+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:51.730245+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:52.730332+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:53.730471+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:54.730560+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:55.730709+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:56.730798+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:57.730903+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:58.730996+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:59.731085+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:00.731177+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:01.731266+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:02.731377+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:03.731603+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:04.731737+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:05.731854+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:06.732010+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:07.732130+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:08.732249+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:09.732350+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:10.732454+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:11.732565+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:12.732659+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:13.732720+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:14.732870+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:15.733010+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:16.733138+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:17.733246+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:18.733357+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:19.733525+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:20.733678+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:21.733845+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:22.733984+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:23.734115+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:24.734255+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:25.734372+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:26.734516+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:27.734657+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:28.734821+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:29.734968+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:30.735125+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:31.735280+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:32.735426+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:33.735575+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:34.735716+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:35.735876+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:36.736018+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:37.736187+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:38.736325+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:39.736447+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:40.736563+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:41.736680+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:42.736824+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:43.736937+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:44.737086+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:45.737236+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:46.737384+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:47.737520+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [1])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:48.737642+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:49.737758+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:50.737898+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:51.738031+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:52.738190+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:53.738339+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:54.738481+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:55.738622+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:56.738798+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:57.738915+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:58.739018+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:59.739173+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:00.739286+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:01.739430+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:02.739573+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:03.739709+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:04.739855+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:05.740011+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:06.740143+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:07.740285+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:08.740426+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:09.740562+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:10.740671+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:11.740735+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:12.740875+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:13.740997+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:14.741138+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:15.741292+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:16.741416+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:17.741551+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:18.741718+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:19.741890+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:20.742010+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9ade0c00 session 0x560c9b978780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ade0c00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:21.742168+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:22.742320+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:23.742469+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:24.742612+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:25.742831+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:26.742992+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:27.743143+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:28.743290+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:29.743425+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:30.743560+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:31.743766+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:32.743932+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:33.744071+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:34.744255+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:35.744425+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:36.744581+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:37.744720+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:38.744859+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:39.745006+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:40.745159+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:41.745309+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:42.745425+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:43.745562+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:44.745715+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:45.745872+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:46.746005+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:47.746132+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:48.746279+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:49.746412+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:50.746573+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:51.746746+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:52.746881+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:53.747047+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:54.747201+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:55.747353+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:56.747523+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:57.747666+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:58.747820+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:59.747969+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:00.748114+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:01.748248+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:02.748390+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:03.748538+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:04.748714+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:05.748904+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:06.749035+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:07.749169+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:08.749310+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:09.749444+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:10.749540+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:11.749719+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:12.749871+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:13.749997+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:14.750114+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:15.750271+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:16.750412+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:17.750548+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:18.750654+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:19.750789+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:20.750892+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:21.751036+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:22.751180+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:23.751323+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:24.751471+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:25.751635+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:26.751770+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:27.751884+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:28.752026+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:29.752183+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:30.752331+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:31.752487+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:32.752628+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:33.752772+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:34.752919+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:35.753085+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:36.753231+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:37.753349+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:38.753487+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:39.753633+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:40.753778+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:41.753910+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:42.754022+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:43.754158+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:44.754305+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:45.754469+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:46.754600+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:47.754747+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:48.754878+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:49.755052+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:50.755196+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:51.755343+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:52.755481+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:53.755597+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:54.755748+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:55.755912+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:56.756058+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:57.756207+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:58.756345+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:59.756472+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:00.756602+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:01.756750+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:02.756866+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:03.757027+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:04.757167+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:05.757348+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:06.757475+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:07.757592+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:08.757715+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:09.757841+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:10.757969+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:11.758094+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:12.758220+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:13.758343+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:14.758478+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:15.758630+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:16.758775+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:17.758912+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:18.759043+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:19.759178+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:20.759357+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:21.759504+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:22.759684+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:23.759896+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:24.760067+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:25.760243+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:26.760437+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:27.760612+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:28.760789+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:29.760926+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:30.761092+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:31.761229+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:32.761374+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:33.761507+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:34.761645+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:35.761825+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:36.761957+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:37.762095+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:38.762224+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:39.762328+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:40.762438+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:41.762550+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:42.762726+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:43.762848+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:44.763008+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:45.763180+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:46.763321+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:47.763441+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:48.763613+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:49.763804+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:50.763943+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:51.764060+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:52.764201+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:53.764336+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:54.764486+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:55.764676+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:56.764854+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:57.765003+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:58.765133+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:59.765259+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:00.765534+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:01.765742+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 2400256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:02.765872+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 496.980529785s of 497.169708252s, submitted: 379
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc5f7000/0x0/0x4ffc00000, data 0x164ccd/0x214000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 2400256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc5f7000/0x0/0x4ffc00000, data 0x164ccd/0x214000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:03.766036+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1310720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968373 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:04.766182+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 136 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b4752c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1294336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:05.766316+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 137 ms_handle_reset con 0x560c9dab7400 session 0x560c9c598780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 16670720 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:06.766465+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 16621568 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:07.766599+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 16621568 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:08.766747+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fb179000/0x0/0x4ffc00000, data 0x15db086/0x1690000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 16588800 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114638 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:09.766896+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 16588800 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:10.767031+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:11.767177+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:12.767318+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:13.767466+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:14.767572+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:15.767716+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:16.767823+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:17.767971+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:18.768122+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:19.768227+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:20.768344+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Cumulative writes: 9273 writes, 35K keys, 9273 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                          Cumulative WAL: 9273 writes, 2281 syncs, 4.07 writes per sync, written: 0.02 GB, 0.02 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 860 writes, 1592 keys, 860 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s
                                          Interval WAL: 860 writes, 406 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
                                          
                                          ** Compaction Stats [m-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-0] **
                                          
                                          ** Compaction Stats [m-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-1] **
                                          
                                          ** Compaction Stats [m-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-2] **
                                          
                                          ** Compaction Stats [p-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-0] **
                                          
                                          ** Compaction Stats [p-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-1] **
                                          
                                          ** Compaction Stats [p-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-2] **
                                          
                                          ** Compaction Stats [O-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-0] **
                                          
                                          ** Compaction Stats [O-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-1] **
                                          
                                          ** Compaction Stats [O-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-2] **
                                          
                                          ** Compaction Stats [L] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [L] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [L] **
                                          
                                          ** Compaction Stats [P] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [P] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [P] **
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:21.768452+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:22.768590+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:23.768705+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:24.768853+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:25.768990+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:26.769114+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:27.769222+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:28.769361+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:29.769468+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:30.769598+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:31.769711+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:32.769853+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:33.769949+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:34.770080+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:35.770207+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:36.770341+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:37.770442+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:38.770545+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:39.770644+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:40.770778+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:41.770869+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d851e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d81d0e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9cf88000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9cf88000 session 0x560c9d81cd20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:42.771005+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.565647125s of 40.628654480s, submitted: 75
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d81cb40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d2912c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:43.771113+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b907e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 16531456 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114524 data_alloc: 218103808 data_used: 286720
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:44.771235+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 16531456 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:45.771348+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9dab7400 session 0x560c9df7c5a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9daae000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9daae000 session 0x560c9cb93c20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d815a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d5b65a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b9790e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:46.771480+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fa989000/0x0/0x4ffc00000, data 0x1dc82a7/0x1e82000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:47.771581+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9dab7400 session 0x560c9cecfe00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fa989000/0x0/0x4ffc00000, data 0x1dc82a7/0x1e82000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:48.771727+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9daad000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9daad000 session 0x560c9d291c20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187648 data_alloc: 218103808 data_used: 286720
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:49.771840+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d20f860
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f800 session 0x560c9b88a1e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 15376384 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:50.771964+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 15278080 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:51.772058+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:52.772167+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:53.772298+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251296 data_alloc: 218103808 data_used: 8495104
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:54.772438+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:55.772565+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:56.772717+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:57.772812+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:58.772939+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251296 data_alloc: 218103808 data_used: 8495104
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:59.773053+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:00.773183+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.162794113s of 18.220115662s, submitted: 57
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:01.773280+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 1507328 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:02.773411+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1064960 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:03.773542+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1064960 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:04.773674+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1064960 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:05.773855+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:06.773987+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:07.774120+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:08.774265+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:09.774385+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102842368 unmapped: 933888 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:10.774516+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102842368 unmapped: 933888 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:11.774644+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102842368 unmapped: 933888 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:12.774774+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:13.774907+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:14.775359+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:15.775512+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:16.775631+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:17.775729+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:18.775842+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:19.775951+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab0e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.730630875s of 18.770584106s, submitted: 77
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9d815680
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101392384 unmapped: 2383872 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9b9781e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:20.776050+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58d800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9b8872c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58d800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9cb93a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9df7dc20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9aaf50e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab0e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9df7da40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:21.776159+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:22.776304+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:23.776424+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9d20d0e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393552 data_alloc: 218103808 data_used: 9076736
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:24.776559+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9a89cb40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:25.776680+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9dae4f00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9cd65a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101515264 unmapped: 13811712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:26.776846+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab0e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58d800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101515264 unmapped: 13811712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:27.776986+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 9478144 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:28.777156+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464002 data_alloc: 234881024 data_used: 18825216
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:29.777278+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8550000/0x0/0x4ffc00000, data 0x305f299/0x311c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:30.777401+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:31.777497+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:32.777634+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.763611794s of 12.791978836s, submitted: 22
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 4505600 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:33.777744+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 4505600 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464474 data_alloc: 234881024 data_used: 18825216
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:34.777851+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f854e000/0x0/0x4ffc00000, data 0x3060299/0x311d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 4472832 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:35.777971+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 4472832 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:36.778094+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 4440064 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:37.778200+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 2662400 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:38.778352+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cb000/0x0/0x4ffc00000, data 0x3be3299/0x3ca0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566700 data_alloc: 234881024 data_used: 19333120
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:39.778584+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:40.778728+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:41.778817+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cb000/0x0/0x4ffc00000, data 0x3be3299/0x3ca0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:42.778917+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:43.779025+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566700 data_alloc: 234881024 data_used: 19333120
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:44.779153+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.381125450s of 12.448619843s, submitted: 102
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:45.779264+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 5226496 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:46.779359+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3be3299/0x3ca0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 5226496 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:47.779454+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9d8150e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9d8152c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 10952704 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9cb92000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:48.779587+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330151 data_alloc: 218103808 data_used: 9076736
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:49.779777+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:50.779938+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8bd7000/0x0/0x4ffc00000, data 0x25c9289/0x2685000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:51.780069+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:52.780146+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:53.780277+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8bd7000/0x0/0x4ffc00000, data 0x25c9289/0x2685000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330151 data_alloc: 218103808 data_used: 9076736
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:54.780377+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:55.780570+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:56.780716+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d290780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab7400 session 0x560c9df7cd20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8bd7000/0x0/0x4ffc00000, data 0x25c9289/0x2685000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.680329323s of 11.851483345s, submitted: 377
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9aaf41e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:57.780824+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:58.780967+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150904 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:59.781087+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:00.781211+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:01.781343+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:02.781504+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:03.781616+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150904 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:04.781724+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:05.781838+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:06.781961+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:07.782068+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:08.782157+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150904 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:09.782261+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:10.782376+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:11.782476+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:12.782582+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab0e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.428812027s of 15.439086914s, submitted: 20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9d645e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9dae61e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d738b40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d5dc1e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab7400 session 0x560c9ab55a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 24510464 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:13.782736+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961e000/0x0/0x4ffc00000, data 0x1b84269/0x1c3e000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 24510464 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198552 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:14.782862+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 24510464 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9d739a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:15.782989+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d645680
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9cf7a960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9a89dc20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 25321472 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:16.784550+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 25305088 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:17.784745+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:18.784905+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1b84279/0x1c3f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240190 data_alloc: 218103808 data_used: 6066176
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:19.785049+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:20.785165+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:21.785278+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:22.785375+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:23.785476+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1b84279/0x1c3f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240190 data_alloc: 218103808 data_used: 6066176
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:24.785577+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:25.785698+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 23937024 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:26.785790+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.976077080s of 13.989899635s, submitted: 12
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1b84279/0x1c3f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107970560 unmapped: 19996672 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d0a000/0x0/0x4ffc00000, data 0x2497279/0x2552000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:27.785873+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 21733376 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:28.786039+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 21733376 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312780 data_alloc: 218103808 data_used: 6541312
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:29.786174+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:30.786285+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:31.786395+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:32.786489+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:33.786604+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312780 data_alloc: 218103808 data_used: 6541312
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:34.786717+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:35.786824+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:36.786995+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:37.787154+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:38.787247+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312780 data_alloc: 218103808 data_used: 6541312
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:39.787406+0000)
Oct 09 10:05:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3781638129' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:40.787526+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:41.787632+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:42.787733+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:43.787855+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312932 data_alloc: 218103808 data_used: 6545408
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:44.787961+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:45.788080+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:46.788178+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:47.788278+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9b888d20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab7400 session 0x560c9da645a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:48.788371+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.863149643s of 21.914012909s, submitted: 80
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9dae5c20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:49.788460+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:50.788558+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:51.788738+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:52.788844+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:53.789149+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:54.789263+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:55.789429+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:56.789541+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:57.789732+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:58.789832+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:59.790011+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:00.790114+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:01.790277+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:02.790448+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:03.790554+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:04.790748+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:05.790906+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:06.791051+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:07.791167+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:08.791275+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.242803574s of 20.254222870s, submitted: 18
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d814960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b88be00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9db541e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58d800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9d209680
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9df7c780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:09.791412+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213863 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:10.791546+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:11.791678+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:12.791843+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:13.791937+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:14.792063+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213863 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:15.792189+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:16.792321+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:17.792454+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:18.792574+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.318160057s of 10.331529617s, submitted: 11
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9cecfc20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:19.792665+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 28049408 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217388 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:20.792736+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103317504 unmapped: 27803648 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:21.792888+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 25927680 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:22.792995+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 25927680 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:23.793093+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 25927680 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:24.793236+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260232 data_alloc: 218103808 data_used: 6615040
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9481000/0x0/0x4ffc00000, data 0x1d21269/0x1ddb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:25.793416+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:26.793544+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:27.793641+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:28.793770+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9481000/0x0/0x4ffc00000, data 0x1d21269/0x1ddb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:29.793858+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260232 data_alloc: 218103808 data_used: 6615040
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.570923805s of 10.576161385s, submitted: 7
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:30.793996+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 22528000 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:31.794084+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 22650880 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:32.794211+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 22650880 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:33.794313+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 22650880 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:34.794441+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330668 data_alloc: 218103808 data_used: 7471104
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:35.794581+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:36.794672+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:37.794781+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:38.794920+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 22634496 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:39.795013+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 22634496 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330668 data_alloc: 218103808 data_used: 7471104
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:40.795106+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 22634496 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:41.795201+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:42.795341+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:43.795481+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:44.795578+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330668 data_alloc: 218103808 data_used: 7471104
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:45.795763+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d1d05a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d8503c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee000 session 0x560c9b474000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 22618112 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d20e1e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.934541702s of 15.981528282s, submitted: 74
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d81cd20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9dac5e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d2092c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee400 session 0x560c9d5dc780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee400 session 0x560c9cd65a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:46.795890+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109633536 unmapped: 25165824 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2bb6279/0x2c71000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:47.796030+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109633536 unmapped: 25165824 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:48.796193+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:49.796334+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385271 data_alloc: 218103808 data_used: 7471104
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:50.796445+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9b978780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:51.796552+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2bb6279/0x2c71000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d1d0780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:52.796679+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d738f00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d644960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:53.796844+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109625344 unmapped: 25174016 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:54.796991+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 25329664 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386529 data_alloc: 218103808 data_used: 7475200
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:55.797135+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 21127168 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85ea000/0x0/0x4ffc00000, data 0x2bb6289/0x2c72000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:56.797243+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:57.797387+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:58.797498+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:59.797614+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431673 data_alloc: 234881024 data_used: 14135296
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:00.797804+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:01.797989+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85ea000/0x0/0x4ffc00000, data 0x2bb6289/0x2c72000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 21061632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:02.798103+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 21061632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:03.798253+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 21061632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.221870422s of 18.247957230s, submitted: 23
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:04.798410+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 19734528 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1462623 data_alloc: 234881024 data_used: 14249984
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:05.798565+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 19611648 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:06.798681+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 19611648 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:07.798863+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 19611648 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:08.799042+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 19578880 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:09.799185+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 19546112 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466451 data_alloc: 234881024 data_used: 14245888
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:10.799331+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 19546112 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:11.799477+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 19546112 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d2083c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9da65a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d81d0e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:12.799648+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 22200320 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:13.799818+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 22200320 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:14.799974+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 22200320 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335504 data_alloc: 218103808 data_used: 7471104
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d5a000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9dae50e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.919174194s of 10.969374657s, submitted: 59
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9dac54a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d814000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:15.800158+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:16.800298+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:17.800439+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:18.800614+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:19.800798+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180974 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:20.800948+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:21.801111+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:22.801254+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:23.801367+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:24.801536+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180974 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:25.801738+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:26.801900+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:27.802049+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:28.802187+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:29.802299+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180974 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:30.802427+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:31.802552+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:32.802710+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.410537720s of 18.433294296s, submitted: 33
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9da652c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:33.802808+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [1])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d20c3c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 27156480 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9da650e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9db17a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d81c780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d5dd2c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9db57e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:34.802946+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267502 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:35.803188+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:36.803373+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:37.803516+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:38.803723+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d7383c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc3000/0x0/0x4ffc00000, data 0x21df269/0x2299000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:39.803859+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267502 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee400 session 0x560c9d81c3c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9da65860
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d2dc960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:40.804000+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 35389440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:41.804091+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 35389440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:42.804229+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:43.804330+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:44.804462+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345620 data_alloc: 234881024 data_used: 11042816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:45.804583+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:46.804682+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:47.804823+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:48.804965+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:49.805125+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345620 data_alloc: 234881024 data_used: 11042816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:50.805273+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111853568 unmapped: 30826496 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:51.805423+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.394321442s of 18.419612885s, submitted: 19
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111419392 unmapped: 31260672 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:52.805585+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 27025408 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:53.805723+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 27025408 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:54.805885+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 27025408 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416066 data_alloc: 234881024 data_used: 11665408
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:55.806055+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 26927104 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:56.806191+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 26927104 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:57.806328+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 26927104 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:58.806466+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:59.806608+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416082 data_alloc: 234881024 data_used: 11665408
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:00.806727+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:01.806893+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:02.807039+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:03.807146+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:04.807272+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416082 data_alloc: 234881024 data_used: 11665408
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:05.807422+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:06.807552+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 26910720 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:07.807679+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 26910720 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:08.807822+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 26910720 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:09.807976+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26902528 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416082 data_alloc: 234881024 data_used: 11665408
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:10.808115+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26902528 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:11.808216+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26902528 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9db57c20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4eec00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9cd65a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9d5dc780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9d209680
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.403728485s of 20.452980042s, submitted: 60
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d2092c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac4d20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4eec00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9da64d20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef400 session 0x560c9a89d680
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db563c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:12.808374+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:13.808548+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:14.808720+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84d7000/0x0/0x4ffc00000, data 0x2cc9289/0x2d85000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448246 data_alloc: 234881024 data_used: 11665408
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:15.808898+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9a89de00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:16.809007+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4eec00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9d81d4a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9db572c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9dac52c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:17.809110+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114900992 unmapped: 27779072 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:18.809206+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 26091520 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:19.809311+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471645 data_alloc: 234881024 data_used: 14168064
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84b1000/0x0/0x4ffc00000, data 0x2ced2bc/0x2dab000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:20.809453+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:21.809563+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84b1000/0x0/0x4ffc00000, data 0x2ced2bc/0x2dab000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:22.809663+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:23.809793+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:24.809932+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 26034176 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471645 data_alloc: 234881024 data_used: 14168064
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:25.810067+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 26034176 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:26.810184+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 26034176 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84b1000/0x0/0x4ffc00000, data 0x2ced2bc/0x2dab000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:27.810348+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.588661194s of 15.604912758s, submitted: 23
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 23617536 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:28.810451+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 22052864 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:29.810601+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1557595 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:30.810726+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:31.810840+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:32.810938+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79f9000/0x0/0x4ffc00000, data 0x37a52bc/0x3863000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:33.811046+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 23502848 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79f9000/0x0/0x4ffc00000, data 0x37a52bc/0x3863000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:34.811188+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558307 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:35.811313+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:36.811426+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:37.811579+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79d8000/0x0/0x4ffc00000, data 0x37c62bc/0x3884000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:38.811710+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 23207936 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:39.811800+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 23207936 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558307 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.588848114s of 12.664453506s, submitted: 125
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:40.811928+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:41.812081+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:42.812199+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:43.812312+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cd000/0x0/0x4ffc00000, data 0x37d12bc/0x388f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:44.812410+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558307 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:45.812528+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:46.812678+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:47.812875+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:48.813046+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:49.813210+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558227 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:50.813373+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:51.813497+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:52.813612+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:53.813723+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:54.813888+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558227 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:55.814041+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:56.814179+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:57.814302+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 23142400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:58.814467+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 23142400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:59.814719+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.197729111s of 19.202938080s, submitted: 5
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 23126016 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:00.814849+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 23117824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:01.814970+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 23117824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:02.815103+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 23117824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:03.815234+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:04.815372+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:05.815538+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:06.815664+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:07.815816+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:08.816028+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:09.816170+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:10.816304+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:11.816459+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:12.816643+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:13.816842+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:14.816988+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:15.817156+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9db554a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cf7b4a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:16.817271+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.945161819s of 16.949874878s, submitted: 5
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dc1e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:17.817377+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8797000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:18.817499+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:19.817608+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422534 data_alloc: 234881024 data_used: 11665408
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:20.817737+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:21.817880+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8797000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9aaf41e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d208b40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d644960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:22.817983+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:23.818119+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:24.818252+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202591 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:25.818391+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:26.818537+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:27.818682+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:28.818841+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:29.818987+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202591 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:30.819114+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:31.819245+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:32.819399+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:33.819503+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:34.819641+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202591 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:35.819800+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:36.819951+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:37.820091+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:38.820231+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.212566376s of 22.240785599s, submitted: 46
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9db56960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d645e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d20c3c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9d644d20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db17c20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9239000/0x0/0x4ffc00000, data 0x1f69269/0x2023000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:39.820374+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279011 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:40.820531+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9cf7a3c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d5dc960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:41.820701+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac4000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9dac5a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:42.820851+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:43.820999+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 31350784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:44.821097+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 29229056 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337825 data_alloc: 218103808 data_used: 8757248
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9238000/0x0/0x4ffc00000, data 0x1f69279/0x2024000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:45.821238+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 29138944 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:46.821574+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 29138944 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cf7af00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b8892c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:47.821672+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d20d860
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:48.821807+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:49.821944+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210448 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:50.822076+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:51.822335+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:52.822447+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:53.822604+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:54.822747+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210448 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:55.822903+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:56.823072+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:57.823228+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.394531250s of 19.422958374s, submitted: 34
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d630d20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4eec00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9dac5e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db55a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d1d0d20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d645860
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:58.823344+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:59.823464+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288735 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:00.823629+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dda40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9d5ddc20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:01.823807+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9247000/0x0/0x4ffc00000, data 0x1f5b269/0x2015000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db16f00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9cf7a000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:02.823941+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9247000/0x0/0x4ffc00000, data 0x1f5b269/0x2015000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:03.824069+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 28672000 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:04.824190+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9a89d4a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9b906b40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1f5b279/0x2016000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 28672000 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356453 data_alloc: 234881024 data_used: 10121216
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4efc00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4efc00 session 0x560c9d20f860
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:05.824308+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:06.824419+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:07.824558+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:08.824719+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:09.824899+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219082 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:10.825099+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:11.825288+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:12.825420+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:13.825819+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:14.825977+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219082 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:15.826144+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:16.826275+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:17.826409+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:18.826555+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:19.826663+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219082 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:20.826789+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:21.826963+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cece000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d20fe00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d20e1e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:22.827078+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9b88a1e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.325824738s of 24.373125076s, submitted: 57
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab3000 session 0x560c9b88b0e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9b88ad20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d814000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d815e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9aaf41e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 36405248 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:23.827191+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 36405248 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:24.827316+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3c00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3c00 session 0x560c9d645e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 36405248 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318042 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9dac4000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:25.827431+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x22e2269/0x239c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d5dc960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9b8892c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 36397056 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:26.827561+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a2800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 36397056 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:27.827721+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:28.827854+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:29.827988+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401186 data_alloc: 234881024 data_used: 12648448
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:30.828132+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x22e2269/0x239c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:31.828267+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:32.828455+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:33.828611+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:34.828733+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x22e2269/0x239c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401186 data_alloc: 234881024 data_used: 12648448
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:35.828920+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:36.829090+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 33054720 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.557071686s of 14.579683304s, submitted: 17
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:37.829205+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 26443776 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:38.829350+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 25993216 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:39.829475+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8595000/0x0/0x4ffc00000, data 0x2c04269/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 25993216 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488802 data_alloc: 234881024 data_used: 13692928
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:40.829617+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 25903104 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:41.829755+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 25903104 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:42.829885+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8595000/0x0/0x4ffc00000, data 0x2c04269/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 25870336 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:43.830055+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:44.830198+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483322 data_alloc: 234881024 data_used: 13692928
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:45.830407+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:46.830597+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:47.830784+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:48.830964+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.212394714s of 11.282471657s, submitted: 85
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f859b000/0x0/0x4ffc00000, data 0x2c07269/0x2cc1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac5e00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9b906960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d2090e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:49.831138+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:50.831338+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:51.831502+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:52.831741+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:53.831981+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:54.832170+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:55.832648+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:56.832837+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:57.833044+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:58.833248+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:59.833459+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:00.833635+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:01.833830+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:02.833996+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:03.834194+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:04.834352+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:05.834537+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a2800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.065891266s of 17.081371307s, submitted: 23
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9d5b7680
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d208000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9b889a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d738f00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9aaf41e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:06.834760+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:07.834917+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x20262cb/0x20e1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:08.835077+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:09.835227+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a2800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9dac52c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315745 data_alloc: 218103808 data_used: 290816
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:10.835390+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:11.835555+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 31391744 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:12.835718+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x20262cb/0x20e1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:13.835832+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:14.835951+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385209 data_alloc: 234881024 data_used: 10616832
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:15.836080+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:16.836202+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:17.836302+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x20262cb/0x20e1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.787096977s of 12.822224617s, submitted: 36
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dae7860
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9dac43c0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:18.836398+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d2ddc20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d20d4a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9db545a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115589120 unmapped: 30769152 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:19.836498+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115589120 unmapped: 30769152 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429577 data_alloc: 234881024 data_used: 10629120
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:20.836591+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cece000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 24887296 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:21.836727+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a2800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9cecfe00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f7fb7000/0x0/0x4ffc00000, data 0x31ea2cb/0x32a5000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dd0e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dd4a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 23437312 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:22.836847+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 23429120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:23.836955+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126525440 unmapped: 19832832 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:24.837088+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126631936 unmapped: 19726336 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1565980 data_alloc: 234881024 data_used: 16048128
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:25.837228+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:26.837378+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f7fb6000/0x0/0x4ffc00000, data 0x31ea2db/0x32a6000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:27.837510+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:28.837622+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:29.837759+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1565204 data_alloc: 234881024 data_used: 16052224
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:30.837854+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:31.837956+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:32.838057+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f7f95000/0x0/0x4ffc00000, data 0x320b2db/0x32c7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.480135918s of 14.569817543s, submitted: 128
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127311872 unmapped: 19046400 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:33.838257+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:34.838388+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616698 data_alloc: 234881024 data_used: 16429056
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:35.838524+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:36.838681+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79f7000/0x0/0x4ffc00000, data 0x37a12db/0x385d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:37.838818+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 18546688 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:38.838916+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 18546688 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:39.839040+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 18399232 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612226 data_alloc: 234881024 data_used: 16429056
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:40.839140+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79e0000/0x0/0x4ffc00000, data 0x37c02db/0x387c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 18399232 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:41.839240+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:42.839381+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:43.839484+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79dd000/0x0/0x4ffc00000, data 0x37c32db/0x387f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:44.839621+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612618 data_alloc: 234881024 data_used: 16429056
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:45.839788+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79dd000/0x0/0x4ffc00000, data 0x37c32db/0x387f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.750670433s of 12.807613373s, submitted: 72
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 128081920 unmapped: 18276352 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:46.839949+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 128081920 unmapped: 18276352 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:47.840056+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 128081920 unmapped: 18276352 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:48.840188+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d5dd680
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9d6441e0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cf000/0x0/0x4ffc00000, data 0x37d12db/0x388d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab01000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9b88b860
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:49.840319+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486899 data_alloc: 234881024 data_used: 10125312
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:50.840444+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:51.840573+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:52.840669+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b474960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9a89cd20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d20c000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:53.840741+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:54.840982+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:55.841182+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:56.841325+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:57.841447+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:58.841556+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:59.841662+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:00.841723+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:01.841824+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:02.841918+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:03.842034+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:04.842167+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:05.842309+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:06.842445+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:07.842549+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:08.842683+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:09.842832+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:10.842940+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.960874557s of 25.003011703s, submitted: 65
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9db57c20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab01000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9b888f00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9aaf45a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9cecfa40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9dac45a0
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:11.843080+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29941760 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:12.843196+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29941760 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab01000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9b474d20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18ec269/0x19a6000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9cd65a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:13.843296+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29941760 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac5c20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d1d1a40
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab01000
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:14.843411+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 29630464 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:15.843546+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306517 data_alloc: 218103808 data_used: 3018752
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:16.843681+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:17.843811+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:18.843936+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1910279/0x19cb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1910279/0x19cb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:19.844727+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:20.845005+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306517 data_alloc: 218103808 data_used: 3018752
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1910279/0x19cb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:21.845098+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:22.845194+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:23.845305+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.842787743s of 12.852742195s, submitted: 11
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:24.845401+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8e2a000/0x0/0x4ffc00000, data 0x236b279/0x2426000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 123723776 unmapped: 22634496 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:25.845507+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390535 data_alloc: 218103808 data_used: 3342336
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:26.845612+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:27.845721+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:28.845862+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:29.845971+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8e07000/0x0/0x4ffc00000, data 0x2381279/0x243c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:30.846089+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390551 data_alloc: 218103808 data_used: 3342336
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:31.846197+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:32.846306+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8e07000/0x0/0x4ffc00000, data 0x2381279/0x243c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:33.846440+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:34.846606+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9d5dcd20
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9ab54780
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.491474152s of 11.555577278s, submitted: 110
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9b888960
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:35.846819+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:36.846937+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:37.847063+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:38.847199+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:39.847330+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:40.847448+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:41.847561+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:42.847714+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:43.847863+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:44.848000+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:45.848169+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:46.848308+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:47.848421+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:48.848554+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:49.848667+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:50.848808+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:51.848945+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:52.849046+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:53.849188+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:54.849346+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:55.849493+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:56.849632+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:57.849726+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:58.849862+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:59.849993+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:00.850122+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:01.850232+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:02.850340+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:03.850448+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:04.850573+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:05.850718+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:06.851789+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:07.851893+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:08.852030+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:09.852163+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:10.852305+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:11.852533+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:12.852664+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:13.852835+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:14.852976+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:15.853137+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:16.853251+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:17.853348+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:18.853503+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:19.853642+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:20.853759+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1800.0 total, 600.0 interval
                                          Cumulative writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                          Cumulative WAL: 12K writes, 3773 syncs, 3.36 writes per sync, written: 0.03 GB, 0.02 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 3402 writes, 12K keys, 3402 commit groups, 1.0 writes per commit group, ingest: 13.97 MB, 0.02 MB/s
                                          Interval WAL: 3402 writes, 1492 syncs, 2.28 writes per sync, written: 0.01 GB, 0.02 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:21.853886+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:22.853994+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:23.854132+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:24.854245+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:25.854379+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:26.854492+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:27.854596+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:28.854708+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:29.854855+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:30.855010+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:31.855117+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:32.855221+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: do_command 'config diff' '{prefix=config diff}'
Oct 09 10:05:06 compute-1 ceph-osd[7514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:33.855320+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: do_command 'config show' '{prefix=config show}'
Oct 09 10:05:06 compute-1 ceph-osd[7514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 09 10:05:06 compute-1 ceph-osd[7514]: do_command 'counter dump' '{prefix=counter dump}'
Oct 09 10:05:06 compute-1 ceph-osd[7514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 29810688 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: do_command 'counter schema' '{prefix=counter schema}'
Oct 09 10:05:06 compute-1 ceph-osd[7514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:34.855422+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 29843456 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:35.855549+0000)
Oct 09 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:05:06 compute-1 ceph-osd[7514]: do_command 'log dump' '{prefix=log dump}'
Oct 09 10:05:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 09 10:05:06 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3763114863' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.17130 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.26701 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.26975 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.27002 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.26725 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1164563443' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/62328478' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2194491156' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.26752 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.26755 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/670162803' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1482807627' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.17202 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.26782 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.27050 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1284909676' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3781638129' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.17229 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.26806 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3763114863' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:06 compute-1 ceph-mon[9795]: from='client.27068 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:06.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 09 10:05:06 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3798516040' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 09 10:05:07 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1096470766' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 09 10:05:07 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/754514562' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:05:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:07.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:07 compute-1 crontab[175350]: (root) LIST (root)
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2569366303' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.17256 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2593566875' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2884084467' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.26836 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.27092 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3798516040' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.17277 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/539467632' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.26863 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.27122 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1096470766' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/754514562' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.17304 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.27140 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.27146 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:07 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3580093067' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:07 compute-1 nova_compute[162974]: 2025-10-09 10:05:07.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 09 10:05:07 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4220218664' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 09 10:05:08 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3105571077' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.26908 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.27161 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/68246502' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.27173 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2236945637' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4220218664' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.26941 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.27194 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/933132204' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.26956 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3105571077' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/911082628' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.26971 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.27221 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2179929761' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:05:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 09 10:05:08 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/864826343' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:05:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:08.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 09 10:05:08 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3954604759' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 09 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023508330' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 09 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4228889836' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 09 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1943906294' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:05:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:09.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 09 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470775445' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 09 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2923247886' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/864826343' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.17385 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1308911326' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3954604759' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/567861671' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2020451013' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.17409 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3023508330' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4228889836' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/702201220' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1411436454' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1943906294' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.17439 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/470775445' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2923247886' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3524500067' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 09 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3035430115' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:05:09 compute-1 podman[175674]: 2025-10-09 10:05:09.593357576 +0000 UTC m=+0.098302551 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 10:05:09 compute-1 podman[175676]: 2025-10-09 10:05:09.627257455 +0000 UTC m=+0.126215264 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 09 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3165839660' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:05:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:05:10.043 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:05:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:05:10.044 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:05:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:05:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 09 10:05:10 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3566093463' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 09 10:05:10 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3564578496' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 09 10:05:10 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1794939133' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3035430115' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4143261129' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.17460 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3165839660' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1467015024' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/884900180' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/991276937' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2695539313' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1217789133' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1984048702' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2624478517' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3566093463' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1673871356' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3564578496' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/705132865' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4032395893' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/212958408' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:05:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1794939133' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:05:10 compute-1 nova_compute[162974]: 2025-10-09 10:05:10.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:10.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 09 10:05:10 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4152597779' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-1 systemd[1]: Starting Hostname Service...
Oct 09 10:05:11 compute-1 systemd[1]: Started Hostname Service.
Oct 09 10:05:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:11.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1464335686' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/310330597' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.27169 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/15960154' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1550517676' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.27178 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2451567545' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4152597779' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.27199 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3056944751' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1744172385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.27440 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.27428 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.27446 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1183017444' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 09 10:05:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2255613207' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 10:05:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103120249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 10:05:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103120249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:05:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 09 10:05:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4038948101' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 09 10:05:12 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4186151168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.27244 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2784783823' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.27470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2255613207' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1760469970' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.27265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/1103120249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/1103120249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4038948101' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.27283 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1049596796' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/499448165' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.27298 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.17691 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.17682 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4186151168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:05:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/225349446' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:12.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:12 compute-1 nova_compute[162974]: 2025-10-09 10:05:12.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 09 10:05:12 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2339899425' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:13.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:13 compute-1 sudo[176405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:05:13 compute-1 sudo[176405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:13 compute-1 sudo[176405]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.17706 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.27548 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3642453026' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.17718 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1009466217' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.17724 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.27572 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4150804299' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.27361 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2339899425' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.17748 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.27605 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.27391 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-1 ceph-mon[9795]: pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/557993267' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.17808 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2364528424' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:05:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 09 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/590546173' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 09 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/128357852' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 09 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/521439099' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/590546173' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.17829 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.27457 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3059809898' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.27689 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.17847 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2788345293' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/128357852' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3101704029' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3255535698' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:05:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:14.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 09 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1194589566' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 09 10:05:15 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/704435951' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:05:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:15.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:15 compute-1 nova_compute[162974]: 2025-10-09 10:05:15.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='client.17865 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/521439099' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2953610832' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2029825744' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1194589566' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='client.17883 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2696534015' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/704435951' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2053053048' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:05:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:05:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1800.0 total, 600.0 interval
                                          Cumulative writes: 5405 writes, 28K keys, 5405 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s
                                          Cumulative WAL: 5405 writes, 5405 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 1510 writes, 7582 keys, 1510 commit groups, 1.0 writes per commit group, ingest: 17.37 MB, 0.03 MB/s
                                          Interval WAL: 1510 writes, 1510 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    409.5      0.11              0.07        15    0.007       0      0       0.0       0.0
                                            L6      1/0   13.47 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    423.3    363.0      0.49              0.27        14    0.035     72K   7333       0.0       0.0
                                           Sum      1/0   13.47 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    348.7    371.2      0.60              0.35        29    0.021     72K   7333       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    337.4    344.0      0.22              0.13        10    0.022     30K   2536       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    423.3    363.0      0.49              0.27        14    0.035     72K   7333       0.0       0.0
                                          High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    412.5      0.10              0.07        14    0.007       0      0       0.0       0.0
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1800.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.042, interval 0.010
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.22 GB write, 0.12 MB/s write, 0.20 GB read, 0.12 MB/s read, 0.6 seconds
                                          Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x55e4b55c29b0#2 capacity: 304.00 MB usage: 17.44 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1144,16.86 MB,5.54594%) FilterBlock(29,216.98 KB,0.0697036%) IndexBlock(29,378.44 KB,0.121568%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
Oct 09 10:05:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 09 10:05:15 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2862235529' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:05:16 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 09 10:05:16 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 09 10:05:16 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 09 10:05:16 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 09 10:05:16 compute-1 kernel: cfg80211: failed to load regulatory.db
Oct 09 10:05:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:16.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:16 compute-1 ceph-mon[9795]: from='client.27550 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:16 compute-1 ceph-mon[9795]: from='client.27788 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:16 compute-1 ceph-mon[9795]: from='client.17928 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:16 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3292770511' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:05:16 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2862235529' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:05:16 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1329928561' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:05:16 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1059604019' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 09 10:05:16 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1855735972' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 09 10:05:16 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4274574662' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:05:16 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 09 10:05:16 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3952041900' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 09 10:05:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:17.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:17 compute-1 ceph-mon[9795]: from='client.27833 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:17 compute-1 ceph-mon[9795]: from='client.27592 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:17 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1952453794' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:05:17 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3952041900' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 09 10:05:17 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3703601036' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 09 10:05:17 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1880054577' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:05:17 compute-1 ceph-mon[9795]: from='client.27857 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:17 compute-1 ceph-mon[9795]: pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:17 compute-1 ceph-mon[9795]: from='client.27625 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:17 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct 09 10:05:17 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2510247055' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 09 10:05:17 compute-1 nova_compute[162974]: 2025-10-09 10:05:17.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:18 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct 09 10:05:18 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1206458610' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 09 10:05:18 compute-1 ovs-appctl[178009]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 09 10:05:18 compute-1 ovs-appctl[178023]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 09 10:05:18 compute-1 ovs-appctl[178031]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 09 10:05:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:18.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:18 compute-1 ceph-mon[9795]: from='client.17988 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:18 compute-1 ceph-mon[9795]: from='client.27869 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:18 compute-1 ceph-mon[9795]: from='client.27643 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:18 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2510247055' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 09 10:05:18 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1139436647' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:05:18 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2006132426' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 09 10:05:18 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1206458610' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 09 10:05:18 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/260578882' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 09 10:05:18 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1268401104' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct 09 10:05:19 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2325890707' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 09 10:05:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:19.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct 09 10:05:19 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1843500510' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.27899 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.27905 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.18030 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.27914 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.27697 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2735777596' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2325890707' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/133937591' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.18051 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1843500510' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 09 10:05:19 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1514504317' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 09 10:05:20 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2853944145' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:05:20 compute-1 nova_compute[162974]: 2025-10-09 10:05:20.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:20.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='client.18066 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='client.27959 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='client.27965 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/977502647' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='client.18084 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='client.27754 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1381470307' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2853944145' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3265058752' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct 09 10:05:20 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1420165825' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 09 10:05:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct 09 10:05:20 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3769982294' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:21.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:21 compute-1 podman[179214]: 2025-10-09 10:05:21.504790792 +0000 UTC m=+0.131492051 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 09 10:05:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct 09 10:05:21 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2700875887' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: from='client.18111 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1420165825' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1079050608' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: from='client.18126 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3769982294' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1461598608' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2681174729' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: from='client.28028 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:21 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1497053247' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct 09 10:05:21 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1640101779' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct 09 10:05:21 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2969326191' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct 09 10:05:22 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/757521440' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.28034 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2700875887' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1640101779' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.18165 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2969326191' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/697504947' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.18177 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/757521440' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3640841864' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/101271543' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:05:22 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/363929979' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-1 nova_compute[162974]: 2025-10-09 10:05:22.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:23 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct 09 10:05:23 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3047035409' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:23.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:23 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/298067314' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/997421174' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 09 10:05:23 compute-1 ceph-mon[9795]: from='client.28097 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-1 ceph-mon[9795]: from='client.27871 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3180762565' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-1 ceph-mon[9795]: pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:23 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3047035409' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3260254795' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct 09 10:05:23 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2416067552' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct 09 10:05:24 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1831731513' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:24.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:24 compute-1 ceph-mon[9795]: from='client.18234 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2416067552' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/207342957' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3534344203' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 ceph-mon[9795]: from='client.28136 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 ceph-mon[9795]: from='client.27913 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1831731513' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2405584722' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4266558429' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct 09 10:05:25 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3732170856' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:25.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct 09 10:05:25 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3379022171' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 nova_compute[162974]: 2025-10-09 10:05:25.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.28163 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3817141235' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.27937 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.28178 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1919356694' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.27946 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3732170856' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.18315 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2843286844' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3379022171' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/426215262' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:26 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct 09 10:05:26 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3687400171' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:26.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:26 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct 09 10:05:26 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1096908375' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.28214 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3632668552' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.27985 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.28223 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/183718478' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.27997 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3687400171' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.18360 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2278152414' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1096908375' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/213647537' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:27.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:27 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 09 10:05:27 compute-1 systemd[1]: Starting Time & Date Service...
Oct 09 10:05:27 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 09 10:05:27 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/248433905' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 systemd[1]: Started Time & Date Service.
Oct 09 10:05:27 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2375740516' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 ceph-mon[9795]: from='client.28268 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 ceph-mon[9795]: from='client.18387 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 ceph-mon[9795]: from='client.28274 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 ceph-mon[9795]: from='client.28280 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 ceph-mon[9795]: pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:27 compute-1 ceph-mon[9795]: from='client.18399 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2732266303' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/248433905' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-1 nova_compute[162974]: 2025-10-09 10:05:27.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:27 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct 09 10:05:27 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2772078955' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:28.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:28 compute-1 ceph-mon[9795]: from='client.28036 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2437616287' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3927175723' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2772078955' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-1 ceph-mon[9795]: from='client.18429 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3010218255' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:29.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:29 compute-1 ceph-mon[9795]: from='client.18435 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:29 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3527154993' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:29 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1539775753' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:29 compute-1 ceph-mon[9795]: pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:29 compute-1 ceph-mon[9795]: from='client.18450 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 09 10:05:30 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865180231' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:30 compute-1 nova_compute[162974]: 2025-10-09 10:05:30.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:30 compute-1 podman[180388]: 2025-10-09 10:05:30.52822809 +0000 UTC m=+0.037643077 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 09 10:05:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:30 compute-1 ceph-mon[9795]: from='client.18456 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:30 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2865180231' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:30 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4242952019' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:31 compute-1 ceph-mon[9795]: pgmap v975: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:32.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:32 compute-1 nova_compute[162974]: 2025-10-09 10:05:32.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:33.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:33 compute-1 sudo[180411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:05:33 compute-1 sudo[180411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:33 compute-1 sudo[180411]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:34 compute-1 ceph-mon[9795]: pgmap v976: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:34.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:05:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:35.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:35 compute-1 nova_compute[162974]: 2025-10-09 10:05:35.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:36 compute-1 ceph-mon[9795]: pgmap v977: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:36.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:37.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:37 compute-1 nova_compute[162974]: 2025-10-09 10:05:37.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:38 compute-1 ceph-mon[9795]: pgmap v978: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:38.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:39.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:40 compute-1 ceph-mon[9795]: pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:40 compute-1 nova_compute[162974]: 2025-10-09 10:05:40.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:40 compute-1 podman[180442]: 2025-10-09 10:05:40.540233377 +0000 UTC m=+0.047820943 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 09 10:05:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:40.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:40 compute-1 podman[180441]: 2025-10-09 10:05:40.56226963 +0000 UTC m=+0.071342987 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 09 10:05:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:41.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:42 compute-1 ceph-mon[9795]: pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:42.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:42 compute-1 nova_compute[162974]: 2025-10-09 10:05:42.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:43.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:43 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 09 10:05:44 compute-1 ceph-mon[9795]: pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:44.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:45.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:45 compute-1 nova_compute[162974]: 2025-10-09 10:05:45.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:46 compute-1 ceph-mon[9795]: pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:46.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:47 compute-1 sudo[180480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:05:47 compute-1 sudo[180480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:47 compute-1 sudo[180480]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:47 compute-1 sudo[180505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:05:47 compute-1 sudo[180505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:47.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:47 compute-1 sudo[180505]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:47 compute-1 sudo[180559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:05:47 compute-1 sudo[180559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:47 compute-1 sudo[180559]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:47 compute-1 sudo[180584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- inventory --format=json-pretty --filter-for-batch
Oct 09 10:05:47 compute-1 sudo[180584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:47 compute-1 nova_compute[162974]: 2025-10-09 10:05:47.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.92501189 +0000 UTC m=+0.025903314 container create be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:05:47 compute-1 systemd[1]: Started libpod-conmon-be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6.scope.
Oct 09 10:05:47 compute-1 systemd[1]: Started libcrun container.
Oct 09 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.979532112 +0000 UTC m=+0.080423526 container init be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 09 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.984671501 +0000 UTC m=+0.085562915 container start be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 09 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.987368105 +0000 UTC m=+0.088259539 container attach be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:05:47 compute-1 clever_mccarthy[180654]: 167 167
Oct 09 10:05:47 compute-1 systemd[1]: libpod-be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6.scope: Deactivated successfully.
Oct 09 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.99009752 +0000 UTC m=+0.090988934 container died be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:05:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-6d256ef6798a82363a63098da41d9e6ae889f9a3c6e2f13c3f6c106729ec0ffc-merged.mount: Deactivated successfully.
Oct 09 10:05:48 compute-1 podman[180641]: 2025-10-09 10:05:47.914479705 +0000 UTC m=+0.015371139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:05:48 compute-1 podman[180641]: 2025-10-09 10:05:48.014218454 +0000 UTC m=+0.115109868 container remove be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:05:48 compute-1 systemd[1]: libpod-conmon-be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6.scope: Deactivated successfully.
Oct 09 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.139312431 +0000 UTC m=+0.028153186 container create dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 09 10:05:48 compute-1 systemd[1]: Started libpod-conmon-dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0.scope.
Oct 09 10:05:48 compute-1 systemd[1]: Started libcrun container.
Oct 09 10:05:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:05:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:05:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:05:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.208910679 +0000 UTC m=+0.097751453 container init dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.214768903 +0000 UTC m=+0.103609657 container start dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 09 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.216126953 +0000 UTC m=+0.104967707 container attach dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 09 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.127995568 +0000 UTC m=+0.016836341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:05:48 compute-1 ceph-mon[9795]: pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:48 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:48 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:48.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]: [
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:     {
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         "available": false,
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         "being_replaced": false,
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         "ceph_device_lvm": false,
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         "lsm_data": {},
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         "lvs": [],
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         "path": "/dev/sr0",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         "rejected_reasons": [
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "Insufficient space (<5GB)",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "Has a FileSystem"
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         ],
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         "sys_api": {
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "actuators": null,
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "device_nodes": [
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:                 "sr0"
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             ],
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "devname": "sr0",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "human_readable_size": "474.00 KB",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "id_bus": "ata",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "model": "QEMU DVD-ROM",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "nr_requests": "64",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "parent": "/dev/sr0",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "partitions": {},
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "path": "/dev/sr0",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "removable": "1",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "rev": "2.5+",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "ro": "0",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "rotational": "0",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "sas_address": "",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "sas_device_handle": "",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "scheduler_mode": "mq-deadline",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "sectors": 0,
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "sectorsize": "2048",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "size": 485376.0,
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "support_discard": "2048",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "type": "disk",
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:             "vendor": "QEMU"
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:         }
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]:     }
Oct 09 10:05:48 compute-1 naughty_lehmann[180690]: ]
Oct 09 10:05:48 compute-1 systemd[1]: libpod-dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0.scope: Deactivated successfully.
Oct 09 10:05:48 compute-1 conmon[180690]: conmon dd710ecd726b5f0b076e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0.scope/container/memory.events
Oct 09 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.770425402 +0000 UTC m=+0.659266155 container died dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 09 10:05:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c-merged.mount: Deactivated successfully.
Oct 09 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.794918186 +0000 UTC m=+0.683758940 container remove dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 10:05:48 compute-1 systemd[1]: libpod-conmon-dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0.scope: Deactivated successfully.
Oct 09 10:05:48 compute-1 sudo[180584]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:49.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-1 ceph-mon[9795]: pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:05:49 compute-1 ceph-mon[9795]: pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:05:50 compute-1 nova_compute[162974]: 2025-10-09 10:05:50.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:50.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:51.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:51 compute-1 podman[182067]: 2025-10-09 10:05:51.846298587 +0000 UTC m=+0.061780300 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 10:05:52 compute-1 ceph-mon[9795]: pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:52.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:52 compute-1 sudo[182090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:05:52 compute-1 sudo[182090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:52 compute-1 sudo[182090]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:52 compute-1 nova_compute[162974]: 2025-10-09 10:05:52.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:53.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:53 compute-1 sudo[182116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:05:53 compute-1 sudo[182116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:53 compute-1 sudo[182116]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:53 compute-1 ceph-mon[9795]: pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:55 compute-1 nova_compute[162974]: 2025-10-09 10:05:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:55 compute-1 nova_compute[162974]: 2025-10-09 10:05:55.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 09 10:05:55 compute-1 nova_compute[162974]: 2025-10-09 10:05:55.136 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 09 10:05:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:55 compute-1 nova_compute[162974]: 2025-10-09 10:05:55.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:56 compute-1 nova_compute[162974]: 2025-10-09 10:05:56.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:56 compute-1 nova_compute[162974]: 2025-10-09 10:05:56.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 09 10:05:56 compute-1 ceph-mon[9795]: pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:56.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.146 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.147 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.147 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.147 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.147 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:05:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:05:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1189174758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.521 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:05:57 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 09 10:05:57 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.721 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.722 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4796MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.722 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.723 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.864 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.865 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.953 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing inventories for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.968 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Updating ProviderTree inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.968 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.980 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing aggregate associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 09 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.996 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing trait associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, traits: HW_CPU_X86_AESNI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX512VAES,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 09 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.007 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:05:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:05:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1898786243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.343 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.348 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.359 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:05:58 compute-1 ceph-mon[9795]: pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1189174758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:05:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1898786243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.364 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.365 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:05:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:58.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.355 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.356 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.356 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.357 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.357 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:05:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1831269240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:05:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:05:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.127 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:06:00 compute-1 ceph-mon[9795]: pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2111307278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2688699616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4016613573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:00.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:01 compute-1 ceph-mon[9795]: pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:06:01 compute-1 podman[182193]: 2025-10-09 10:06:01.509963328 +0000 UTC m=+0.038193624 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:06:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:02.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:02 compute-1 nova_compute[162974]: 2025-10-09 10:06:02.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:03 compute-1 nova_compute[162974]: 2025-10-09 10:06:03.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:03.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:04 compute-1 sudo[173371]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:04 compute-1 sshd-session[173370]: Received disconnect from 192.168.122.10 port 43312:11: disconnected by user
Oct 09 10:06:04 compute-1 sshd-session[173370]: Disconnected from user zuul 192.168.122.10 port 43312
Oct 09 10:06:04 compute-1 sshd-session[173351]: pam_unix(sshd:session): session closed for user zuul
Oct 09 10:06:04 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Oct 09 10:06:04 compute-1 systemd[1]: session-40.scope: Consumed 2min 932ms CPU time, 728.4M memory peak, read 272.2M from disk, written 208.9M to disk.
Oct 09 10:06:04 compute-1 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Oct 09 10:06:04 compute-1 systemd-logind[798]: Removed session 40.
Oct 09 10:06:04 compute-1 nova_compute[162974]: 2025-10-09 10:06:04.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:04 compute-1 sshd-session[182211]: Accepted publickey for zuul from 192.168.122.10 port 48932 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 10:06:04 compute-1 systemd-logind[798]: New session 42 of user zuul.
Oct 09 10:06:04 compute-1 systemd[1]: Started Session 42 of User zuul.
Oct 09 10:06:04 compute-1 sshd-session[182211]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:06:04 compute-1 sudo[182215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-1-2025-10-09-tnsenlz.tar.xz
Oct 09 10:06:04 compute-1 sudo[182215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:06:04 compute-1 sudo[182215]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:04 compute-1 sshd-session[182214]: Received disconnect from 192.168.122.10 port 48932:11: disconnected by user
Oct 09 10:06:04 compute-1 sshd-session[182214]: Disconnected from user zuul 192.168.122.10 port 48932
Oct 09 10:06:04 compute-1 sshd-session[182211]: pam_unix(sshd:session): session closed for user zuul
Oct 09 10:06:04 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Oct 09 10:06:04 compute-1 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Oct 09 10:06:04 compute-1 systemd-logind[798]: Removed session 42.
Oct 09 10:06:04 compute-1 ceph-mon[9795]: pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:04 compute-1 sshd-session[182240]: Accepted publickey for zuul from 192.168.122.10 port 48946 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 10:06:04 compute-1 systemd-logind[798]: New session 43 of user zuul.
Oct 09 10:06:04 compute-1 systemd[1]: Started Session 43 of User zuul.
Oct 09 10:06:04 compute-1 sshd-session[182240]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:06:04 compute-1 sudo[182244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 09 10:06:04 compute-1 sudo[182244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:06:04 compute-1 sudo[182244]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:04 compute-1 sshd-session[182243]: Received disconnect from 192.168.122.10 port 48946:11: disconnected by user
Oct 09 10:06:04 compute-1 sshd-session[182243]: Disconnected from user zuul 192.168.122.10 port 48946
Oct 09 10:06:04 compute-1 sshd-session[182240]: pam_unix(sshd:session): session closed for user zuul
Oct 09 10:06:04 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Oct 09 10:06:04 compute-1 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Oct 09 10:06:04 compute-1 systemd-logind[798]: Removed session 43.
Oct 09 10:06:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:04.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:06:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:05.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:05 compute-1 nova_compute[162974]: 2025-10-09 10:06:05.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:06 compute-1 ceph-mon[9795]: pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:06.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:07 compute-1 ceph-mon[9795]: pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:06:07 compute-1 nova_compute[162974]: 2025-10-09 10:06:07.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:08.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:09.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:06:10.044 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:06:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:06:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:06:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:06:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:06:10 compute-1 ceph-mon[9795]: pgmap v995: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:10 compute-1 nova_compute[162974]: 2025-10-09 10:06:10.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:10.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:11 compute-1 podman[182273]: 2025-10-09 10:06:11.535196863 +0000 UTC m=+0.041850529 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 10:06:11 compute-1 podman[182274]: 2025-10-09 10:06:11.538422573 +0000 UTC m=+0.044890659 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Oct 09 10:06:12 compute-1 ceph-mon[9795]: pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:06:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2347622312' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:06:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/2347622312' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:06:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:12.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:12 compute-1 nova_compute[162974]: 2025-10-09 10:06:12.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:13 compute-1 ceph-mon[9795]: pgmap v997: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:13 compute-1 sudo[182306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:06:13 compute-1 sudo[182306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:13 compute-1 sudo[182306]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:14.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:14 compute-1 systemd[1]: Stopping User Manager for UID 1000...
Oct 09 10:06:14 compute-1 systemd[173355]: Activating special unit Exit the Session...
Oct 09 10:06:14 compute-1 systemd[173355]: Stopped target Main User Target.
Oct 09 10:06:14 compute-1 systemd[173355]: Stopped target Basic System.
Oct 09 10:06:14 compute-1 systemd[173355]: Stopped target Paths.
Oct 09 10:06:14 compute-1 systemd[173355]: Stopped target Sockets.
Oct 09 10:06:14 compute-1 systemd[173355]: Stopped target Timers.
Oct 09 10:06:14 compute-1 systemd[173355]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 09 10:06:14 compute-1 systemd[173355]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 10:06:14 compute-1 systemd[173355]: Closed D-Bus User Message Bus Socket.
Oct 09 10:06:14 compute-1 systemd[173355]: Stopped Create User's Volatile Files and Directories.
Oct 09 10:06:14 compute-1 systemd[173355]: Removed slice User Application Slice.
Oct 09 10:06:14 compute-1 systemd[173355]: Reached target Shutdown.
Oct 09 10:06:14 compute-1 systemd[173355]: Finished Exit the Session.
Oct 09 10:06:14 compute-1 systemd[173355]: Reached target Exit the Session.
Oct 09 10:06:14 compute-1 systemd[1]: user@1000.service: Deactivated successfully.
Oct 09 10:06:14 compute-1 systemd[1]: Stopped User Manager for UID 1000.
Oct 09 10:06:14 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct 09 10:06:14 compute-1 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct 09 10:06:14 compute-1 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct 09 10:06:14 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct 09 10:06:14 compute-1 systemd[1]: Removed slice User Slice of UID 1000.
Oct 09 10:06:14 compute-1 systemd[1]: user-1000.slice: Consumed 2min 1.291s CPU time, 734.1M memory peak, read 272.2M from disk, written 208.9M to disk.
Oct 09 10:06:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:15.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:15 compute-1 nova_compute[162974]: 2025-10-09 10:06:15.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 09 10:06:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 09 10:06:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 09 10:06:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 09 10:06:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:16 compute-1 ceph-mon[9795]: pgmap v998: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:16.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:17.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:17 compute-1 nova_compute[162974]: 2025-10-09 10:06:17.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:18 compute-1 ceph-mon[9795]: pgmap v999: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 0 B/s wr, 8 op/s
Oct 09 10:06:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:18.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:20 compute-1 ceph-mon[9795]: pgmap v1000: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Oct 09 10:06:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:06:20 compute-1 nova_compute[162974]: 2025-10-09 10:06:20.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:20.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:22 compute-1 ceph-mon[9795]: pgmap v1001: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct 09 10:06:22 compute-1 podman[182336]: 2025-10-09 10:06:22.546248561 +0000 UTC m=+0.057499560 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 10:06:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:22.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:22 compute-1 nova_compute[162974]: 2025-10-09 10:06:22.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:23.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:24 compute-1 ceph-mon[9795]: pgmap v1002: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct 09 10:06:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:24.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:25.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:25 compute-1 nova_compute[162974]: 2025-10-09 10:06:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:26 compute-1 ceph-mon[9795]: pgmap v1003: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct 09 10:06:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:26.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:27 compute-1 ceph-mon[9795]: pgmap v1004: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct 09 10:06:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:27.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:27 compute-1 nova_compute[162974]: 2025-10-09 10:06:27.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:28.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:29.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:30 compute-1 ceph-mon[9795]: pgmap v1005: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Oct 09 10:06:30 compute-1 nova_compute[162974]: 2025-10-09 10:06:30.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:30.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:32 compute-1 ceph-mon[9795]: pgmap v1006: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 0 B/s wr, 130 op/s
Oct 09 10:06:32 compute-1 podman[182364]: 2025-10-09 10:06:32.554673917 +0000 UTC m=+0.067274196 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 10:06:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:32.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:32 compute-1 nova_compute[162974]: 2025-10-09 10:06:32.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:33.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:33 compute-1 sudo[182382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:06:33 compute-1 sudo[182382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:33 compute-1 sudo[182382]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:34 compute-1 ceph-mon[9795]: pgmap v1007: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:34.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:06:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:35.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:35 compute-1 nova_compute[162974]: 2025-10-09 10:06:35.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:36 compute-1 ceph-mon[9795]: pgmap v1008: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:36.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:37.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:37 compute-1 nova_compute[162974]: 2025-10-09 10:06:37.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:38 compute-1 ceph-mon[9795]: pgmap v1009: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:06:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:38.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:39.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:40 compute-1 ceph-mon[9795]: pgmap v1010: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:40 compute-1 nova_compute[162974]: 2025-10-09 10:06:40.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:40.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:41 compute-1 ceph-mon[9795]: pgmap v1011: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:06:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:41.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:42 compute-1 podman[182411]: 2025-10-09 10:06:42.535186446 +0000 UTC m=+0.042874185 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 10:06:42 compute-1 podman[182412]: 2025-10-09 10:06:42.53555009 +0000 UTC m=+0.042071371 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:06:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:42.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:42 compute-1 nova_compute[162974]: 2025-10-09 10:06:42.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:43.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:44 compute-1 ceph-mon[9795]: pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:44.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:45.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:45 compute-1 nova_compute[162974]: 2025-10-09 10:06:45.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:46 compute-1 ceph-mon[9795]: pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:46.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:47.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:47 compute-1 nova_compute[162974]: 2025-10-09 10:06:47.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:48 compute-1 ceph-mon[9795]: pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:06:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:48.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:49.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:50 compute-1 ceph-mon[9795]: pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:06:50 compute-1 nova_compute[162974]: 2025-10-09 10:06:50.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:50.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:51.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:52 compute-1 ceph-mon[9795]: pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:06:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:52.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:52 compute-1 nova_compute[162974]: 2025-10-09 10:06:52.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:52 compute-1 sudo[182449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:06:52 compute-1 sudo[182449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:52 compute-1 sudo[182449]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-1 sudo[182481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 09 10:06:53 compute-1 sudo[182481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-1 podman[182473]: 2025-10-09 10:06:53.050225299 +0000 UTC m=+0.060120028 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:06:53 compute-1 sudo[182481]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-1 sudo[182541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:06:53 compute-1 sudo[182541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-1 sudo[182541]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-1 sudo[182566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:06:53 compute-1 sudo[182566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:53.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:53 compute-1 sudo[182615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:06:53 compute-1 sudo[182566]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-1 sudo[182615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-1 sudo[182615]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-1 sudo[182646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:06:53 compute-1 sudo[182646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-1 sudo[182646]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-1 sudo[182671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 10:06:53 compute-1 sudo[182671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:54 compute-1 sudo[182671]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 ceph-mon[9795]: pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:54.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:55.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:55 compute-1 nova_compute[162974]: 2025-10-09 10:06:55.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:56 compute-1 ceph-mon[9795]: pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:06:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:56.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.139 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.140 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:06:57 compute-1 ceph-mon[9795]: pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:06:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:06:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/703560270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.485 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.679 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.680 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4960MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.680 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.681 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.780 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.780 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.792 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:06:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1093705681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.126 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.129 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.144 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.146 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.146 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:06:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/703560270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1093705681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:58.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:58 compute-1 sudo[182758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:06:58 compute-1 sudo[182758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:58 compute-1 sudo[182758]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:59 compute-1 ceph-mon[9795]: pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:06:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:59.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.138 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:07:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3815037012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2836267413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1231242342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2014211279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:00.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:01 compute-1 nova_compute[162974]: 2025-10-09 10:07:01.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:01 compute-1 ceph-mon[9795]: pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:07:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:01.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.507222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421507253, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2311, "num_deletes": 259, "total_data_size": 5769178, "memory_usage": 5868800, "flush_reason": "Manual Compaction"}
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421515649, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3653044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28293, "largest_seqno": 30599, "table_properties": {"data_size": 3642593, "index_size": 6305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 26278, "raw_average_key_size": 21, "raw_value_size": 3620093, "raw_average_value_size": 3006, "num_data_blocks": 273, "num_entries": 1204, "num_filter_entries": 1204, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004276, "oldest_key_time": 1760004276, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 8452 microseconds, and 6895 cpu microseconds.
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.515673) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3653044 bytes OK
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.515710) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.516041) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.516051) EVENT_LOG_v1 {"time_micros": 1760004421516048, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.516062) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 5757734, prev total WAL file size 5757734, number of live WAL files 2.
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.517142) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3567KB)], [54(13MB)]
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421517181, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17780134, "oldest_snapshot_seqno": -1}
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6469 keys, 17623957 bytes, temperature: kUnknown
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421555199, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17623957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17577246, "index_size": 29449, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164696, "raw_average_key_size": 25, "raw_value_size": 17457123, "raw_average_value_size": 2698, "num_data_blocks": 1206, "num_entries": 6469, "num_filter_entries": 6469, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.555414) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17623957 bytes
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.563275) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 468.4 rd, 464.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 13.5 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 7005, records dropped: 536 output_compression: NoCompression
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.563291) EVENT_LOG_v1 {"time_micros": 1760004421563283, "job": 32, "event": "compaction_finished", "compaction_time_micros": 37958, "compaction_time_cpu_micros": 24532, "output_level": 6, "num_output_files": 1, "total_output_size": 17623957, "num_input_records": 7005, "num_output_records": 6469, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421564190, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421566164, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.517106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.126 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:07:02 compute-1 ceph-mon[9795]: pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:07:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:02.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:03.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:03 compute-1 podman[182786]: 2025-10-09 10:07:03.535156632 +0000 UTC m=+0.041016042 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid)
Oct 09 10:07:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:04 compute-1 ceph-mon[9795]: pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:07:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:07:05 compute-1 nova_compute[162974]: 2025-10-09 10:07:05.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:05.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:05 compute-1 nova_compute[162974]: 2025-10-09 10:07:05.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:06.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:06 compute-1 ceph-mon[9795]: pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2 op/s
Oct 09 10:07:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:07.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:07 compute-1 nova_compute[162974]: 2025-10-09 10:07:07.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:08.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:08 compute-1 ceph-mon[9795]: pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:09.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:07:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:07:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:07:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:07:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:07:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:07:10 compute-1 nova_compute[162974]: 2025-10-09 10:07:10.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:10.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:10 compute-1 ceph-mon[9795]: pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:11.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 10:07:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653895063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:07:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 10:07:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653895063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:07:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:12 compute-1 ceph-mon[9795]: pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1 op/s
Oct 09 10:07:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3653895063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:07:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3653895063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:07:12 compute-1 nova_compute[162974]: 2025-10-09 10:07:12.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:13.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:13 compute-1 podman[182809]: 2025-10-09 10:07:13.527739021 +0000 UTC m=+0.037793405 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:07:13 compute-1 podman[182808]: 2025-10-09 10:07:13.554298535 +0000 UTC m=+0.065501635 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:07:13 compute-1 sudo[182842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:07:13 compute-1 sudo[182842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:13 compute-1 sudo[182842]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:14.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:14 compute-1 ceph-mon[9795]: pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:15.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:15 compute-1 nova_compute[162974]: 2025-10-09 10:07:15.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:16 compute-1 ceph-mon[9795]: pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:17.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:17 compute-1 nova_compute[162974]: 2025-10-09 10:07:17.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:18.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:18 compute-1 ceph-mon[9795]: pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:07:20 compute-1 nova_compute[162974]: 2025-10-09 10:07:20.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:20.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:20 compute-1 ceph-mon[9795]: pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:21.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:22.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:22 compute-1 ceph-mon[9795]: pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:22 compute-1 nova_compute[162974]: 2025-10-09 10:07:22.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:23.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:23 compute-1 podman[182872]: 2025-10-09 10:07:23.537438451 +0000 UTC m=+0.050503218 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 09 10:07:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:24.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:24 compute-1 ceph-mon[9795]: pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:25.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:25 compute-1 nova_compute[162974]: 2025-10-09 10:07:25.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:26.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:26 compute-1 ceph-mon[9795]: pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:27.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:27 compute-1 nova_compute[162974]: 2025-10-09 10:07:27.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:28.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:28 compute-1 ceph-mon[9795]: pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:29.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:30 compute-1 nova_compute[162974]: 2025-10-09 10:07:30.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:30.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:30 compute-1 ceph-mon[9795]: pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:31.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:32.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:32 compute-1 ceph-mon[9795]: pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:32 compute-1 nova_compute[162974]: 2025-10-09 10:07:32.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:33.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:33 compute-1 sudo[182900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:07:33 compute-1 sudo[182900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:33 compute-1 sudo[182900]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:33 compute-1 podman[182924]: 2025-10-09 10:07:33.937339538 +0000 UTC m=+0.041592718 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 10:07:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:34.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:34 compute-1 ceph-mon[9795]: pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:07:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:35 compute-1 nova_compute[162974]: 2025-10-09 10:07:35.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:36.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:36 compute-1 ceph-mon[9795]: pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:37 compute-1 nova_compute[162974]: 2025-10-09 10:07:37.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:38.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:38 compute-1 ceph-mon[9795]: pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:39.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:40 compute-1 nova_compute[162974]: 2025-10-09 10:07:40.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:40.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:40 compute-1 ceph-mon[9795]: pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:41.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:42.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:42 compute-1 nova_compute[162974]: 2025-10-09 10:07:42.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:42 compute-1 ceph-mon[9795]: pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:43.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:44 compute-1 podman[182947]: 2025-10-09 10:07:44.536163107 +0000 UTC m=+0.039279658 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS)
Oct 09 10:07:44 compute-1 podman[182946]: 2025-10-09 10:07:44.536225113 +0000 UTC m=+0.038577514 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 10:07:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:44.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:44 compute-1 ceph-mon[9795]: pgmap v1043: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:45.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:45 compute-1 nova_compute[162974]: 2025-10-09 10:07:45.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:46.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:46 compute-1 ceph-mon[9795]: pgmap v1044: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:47.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:47 compute-1 nova_compute[162974]: 2025-10-09 10:07:47.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:48.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:48 compute-1 ceph-mon[9795]: pgmap v1045: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.952925) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468952989, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 660, "num_deletes": 251, "total_data_size": 1270763, "memory_usage": 1283312, "flush_reason": "Manual Compaction"}
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468956139, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 836219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30604, "largest_seqno": 31259, "table_properties": {"data_size": 832917, "index_size": 1210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7395, "raw_average_key_size": 19, "raw_value_size": 826465, "raw_average_value_size": 2124, "num_data_blocks": 55, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004422, "oldest_key_time": 1760004422, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 3223 microseconds, and 2278 cpu microseconds.
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956159) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 836219 bytes OK
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956171) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956864) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956874) EVENT_LOG_v1 {"time_micros": 1760004468956871, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956886) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1267152, prev total WAL file size 1267152, number of live WAL files 2.
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957193) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(816KB)], [57(16MB)]
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468957218, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18460176, "oldest_snapshot_seqno": -1}
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6347 keys, 16355035 bytes, temperature: kUnknown
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468995380, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 16355035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16310123, "index_size": 27970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 162813, "raw_average_key_size": 25, "raw_value_size": 16193121, "raw_average_value_size": 2551, "num_data_blocks": 1141, "num_entries": 6347, "num_filter_entries": 6347, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.995519) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 16355035 bytes
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.001625) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 483.2 rd, 428.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 16.8 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(41.6) write-amplify(19.6) OK, records in: 6858, records dropped: 511 output_compression: NoCompression
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.001638) EVENT_LOG_v1 {"time_micros": 1760004469001633, "job": 34, "event": "compaction_finished", "compaction_time_micros": 38204, "compaction_time_cpu_micros": 23187, "output_level": 6, "num_output_files": 1, "total_output_size": 16355035, "num_input_records": 6858, "num_output_records": 6347, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469001809, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469003561, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:49.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:07:50 compute-1 nova_compute[162974]: 2025-10-09 10:07:50.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:50.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:50 compute-1 ceph-mon[9795]: pgmap v1046: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:52.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:52 compute-1 nova_compute[162974]: 2025-10-09 10:07:52.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:52 compute-1 ceph-mon[9795]: pgmap v1047: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:53.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:53 compute-1 sudo[182985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:07:53 compute-1 sudo[182985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:53 compute-1 sudo[182985]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:54 compute-1 podman[183009]: 2025-10-09 10:07:54.02951601 +0000 UTC m=+0.061431852 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 10:07:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:54.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:54 compute-1 ceph-mon[9795]: pgmap v1048: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:55.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:55 compute-1 nova_compute[162974]: 2025-10-09 10:07:55.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:56 compute-1 ceph-mon[9795]: pgmap v1049: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:57 compute-1 nova_compute[162974]: 2025-10-09 10:07:57.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:58.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:58 compute-1 ceph-mon[9795]: pgmap v1050: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.138 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.139 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.139 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:07:59 compute-1 sudo[183035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:07:59 compute-1 sudo[183035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:59 compute-1 sudo[183035]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:59 compute-1 sudo[183061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:07:59 compute-1 sudo[183061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:59 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:07:59 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/273661975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.480 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:07:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:07:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:59 compute-1 sudo[183061]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.680 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.681 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4983MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.681 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.681 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.726 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.727 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.738 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:08:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/273661975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/700554729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:08:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:08:00 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1270285005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.072 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.075 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.088 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.089 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.089 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:00.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:01 compute-1 ceph-mon[9795]: pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 10:08:01 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1270285005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:01 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1534538487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:01 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2268930699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.088 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.089 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.089 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.113 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:08:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:01.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/33496731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.125 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:08:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:02.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:03 compute-1 ceph-mon[9795]: pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:03 compute-1 nova_compute[162974]: 2025-10-09 10:08:03.122 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:03.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:03 compute-1 sudo[183161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:08:03 compute-1 sudo[183161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:08:03 compute-1 sudo[183161]: pam_unix(sudo:session): session closed for user root
Oct 09 10:08:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:08:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:08:04 compute-1 ceph-mon[9795]: pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 10:08:04 compute-1 podman[183186]: 2025-10-09 10:08:04.537202913 +0000 UTC m=+0.044867863 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 10:08:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:04.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:08:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:05.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:05 compute-1 nova_compute[162974]: 2025-10-09 10:08:05.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:06 compute-1 nova_compute[162974]: 2025-10-09 10:08:06.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:06 compute-1 ceph-mon[9795]: pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:06.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:07.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:07 compute-1 nova_compute[162974]: 2025-10-09 10:08:07.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:08 compute-1 ceph-mon[9795]: pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 10:08:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:09.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:08:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:10.046 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:08:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:10.046 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:08:10 compute-1 nova_compute[162974]: 2025-10-09 10:08:10.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:10 compute-1 ceph-mon[9795]: pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 10:08:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:11.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:12.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:12 compute-1 ceph-mon[9795]: pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/1775733170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:08:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/1775733170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:08:12 compute-1 nova_compute[162974]: 2025-10-09 10:08:12.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:13.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:14 compute-1 sudo[183212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:08:14 compute-1 sudo[183212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:08:14 compute-1 sudo[183212]: pam_unix(sudo:session): session closed for user root
Oct 09 10:08:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:14.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:14 compute-1 ceph-mon[9795]: pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:15.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:15 compute-1 podman[183239]: 2025-10-09 10:08:15.527365412 +0000 UTC m=+0.037722221 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 09 10:08:15 compute-1 podman[183238]: 2025-10-09 10:08:15.556192321 +0000 UTC m=+0.068003162 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 09 10:08:15 compute-1 nova_compute[162974]: 2025-10-09 10:08:15.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:16.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:16 compute-1 ceph-mon[9795]: pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:17.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:17 compute-1 nova_compute[162974]: 2025-10-09 10:08:17.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:18.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:18 compute-1 ceph-mon[9795]: pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:19.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:08:20 compute-1 nova_compute[162974]: 2025-10-09 10:08:20.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:20.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:20 compute-1 ceph-mon[9795]: pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:20 compute-1 nova_compute[162974]: 2025-10-09 10:08:20.972 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:08:20 compute-1 nova_compute[162974]: 2025-10-09 10:08:20.986 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:08:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:21.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:22.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:22 compute-1 ceph-mon[9795]: pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:22 compute-1 nova_compute[162974]: 2025-10-09 10:08:22.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:23.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:24 compute-1 podman[183276]: 2025-10-09 10:08:24.546162912 +0000 UTC m=+0.058751728 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:08:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:24 compute-1 ceph-mon[9795]: pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:25.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:25 compute-1 nova_compute[162974]: 2025-10-09 10:08:25.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:25 compute-1 nova_compute[162974]: 2025-10-09 10:08:25.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:25 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:25.715 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:08:25 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:25.715 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:08:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:26 compute-1 ceph-mon[9795]: pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:27.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:27 compute-1 nova_compute[162974]: 2025-10-09 10:08:27.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:28.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:28 compute-1 ceph-mon[9795]: pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:29.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:30 compute-1 nova_compute[162974]: 2025-10-09 10:08:30.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:30 compute-1 ceph-mon[9795]: pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:31.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:32.718 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:08:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:32.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:32 compute-1 ceph-mon[9795]: pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:32 compute-1 nova_compute[162974]: 2025-10-09 10:08:32.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:33.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:34 compute-1 sudo[183304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:08:34 compute-1 sudo[183304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:08:34 compute-1 sudo[183304]: pam_unix(sudo:session): session closed for user root
Oct 09 10:08:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:34.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:34 compute-1 ceph-mon[9795]: pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:08:35 compute-1 podman[183330]: 2025-10-09 10:08:35.528155361 +0000 UTC m=+0.037995266 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:08:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:35.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:35 compute-1 nova_compute[162974]: 2025-10-09 10:08:35.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:36 compute-1 ceph-mon[9795]: pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:08:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:37.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:37 compute-1 nova_compute[162974]: 2025-10-09 10:08:37.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:38.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:38 compute-1 ceph-mon[9795]: pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:08:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:39.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:40 compute-1 nova_compute[162974]: 2025-10-09 10:08:40.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:40.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:40 compute-1 ceph-mon[9795]: pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:08:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:41.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:42.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:42 compute-1 nova_compute[162974]: 2025-10-09 10:08:42.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:42 compute-1 ceph-mon[9795]: pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:08:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:43.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:08:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:44.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:08:44 compute-1 ceph-mon[9795]: pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:08:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:45.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:45 compute-1 nova_compute[162974]: 2025-10-09 10:08:45.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:46 compute-1 podman[183352]: 2025-10-09 10:08:46.526260584 +0000 UTC m=+0.038154905 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 09 10:08:46 compute-1 podman[183353]: 2025-10-09 10:08:46.531093336 +0000 UTC m=+0.040312694 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 09 10:08:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:46.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:46 compute-1 ceph-mon[9795]: pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:08:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:47.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:47 compute-1 nova_compute[162974]: 2025-10-09 10:08:47.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:08:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:48.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:08:48 compute-1 ceph-mon[9795]: pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:49.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:08:50 compute-1 nova_compute[162974]: 2025-10-09 10:08:50.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:50.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:50 compute-1 ceph-mon[9795]: pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:51.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:52.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:52 compute-1 nova_compute[162974]: 2025-10-09 10:08:52.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:53 compute-1 ceph-mon[9795]: pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:53.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:54 compute-1 sudo[183390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:08:54 compute-1 sudo[183390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:08:54 compute-1 sudo[183390]: pam_unix(sudo:session): session closed for user root
Oct 09 10:08:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:54.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:55 compute-1 ceph-mon[9795]: pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:55 compute-1 podman[183416]: 2025-10-09 10:08:55.558448209 +0000 UTC m=+0.064277878 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 09 10:08:55 compute-1 nova_compute[162974]: 2025-10-09 10:08:55.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:56.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:57 compute-1 ceph-mon[9795]: pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:57 compute-1 nova_compute[162974]: 2025-10-09 10:08:57.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:58.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:59 compute-1 ceph-mon[9795]: pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:08:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:59.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.137 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.138 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.138 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.138 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.138 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:09:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:09:00 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2013677285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.482 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.686 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.687 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4985MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.687 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.688 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.728 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.728 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.746 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:09:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:00.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:01 compute-1 ceph-mon[9795]: pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:01 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2013677285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:01 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1526071422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:09:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3812894481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.084 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.088 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.099 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.101 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.101 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:09:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3812894481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1405609489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:02.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:02 compute-1 nova_compute[162974]: 2025-10-09 10:09:02.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:03 compute-1 ceph-mon[9795]: pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:03 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2740802142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.091 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.091 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.091 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.102 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.102 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.102 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.103 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.121 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:03.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:03 compute-1 sudo[183489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:09:03 compute-1 sudo[183489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:03 compute-1 sudo[183489]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:03 compute-1 sudo[183514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:09:03 compute-1 sudo[183514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:04 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2969860949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:04 compute-1 sudo[183514]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:04.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:05 compute-1 ceph-mon[9795]: pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:09:05 compute-1 ceph-mon[9795]: pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:09:05 compute-1 ceph-mon[9795]: pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 718 B/s rd, 0 op/s
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:09:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:05.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:05 compute-1 nova_compute[162974]: 2025-10-09 10:09:05.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:06 compute-1 nova_compute[162974]: 2025-10-09 10:09:06.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:06 compute-1 podman[183570]: 2025-10-09 10:09:06.529101678 +0000 UTC m=+0.037170290 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 09 10:09:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:06.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:07 compute-1 ceph-mon[9795]: pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:09:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:07.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:07 compute-1 nova_compute[162974]: 2025-10-09 10:09:07.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:08 compute-1 sudo[183588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:09:08 compute-1 sudo[183588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:08 compute-1 sudo[183588]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:08.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:09 compute-1 ceph-mon[9795]: pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:09:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:09:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:09:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:09.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:09:10.046 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:09:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:09:10.047 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:09:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:09:10.047 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:09:10 compute-1 nova_compute[162974]: 2025-10-09 10:09:10.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:10.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:11 compute-1 ceph-mon[9795]: pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 718 B/s rd, 0 op/s
Oct 09 10:09:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:11.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3764022889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:09:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/3764022889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:09:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:12.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:12 compute-1 nova_compute[162974]: 2025-10-09 10:09:12.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:13 compute-1 ceph-mon[9795]: pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:09:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:13.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:14 compute-1 sudo[183616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:09:14 compute-1 sudo[183616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:14 compute-1 sudo[183616]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:14 compute-1 ceph-mon[9795]: pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:09:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:14.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:15.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:15 compute-1 nova_compute[162974]: 2025-10-09 10:09:15.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:16.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:17 compute-1 ceph-mon[9795]: pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:17 compute-1 podman[183643]: 2025-10-09 10:09:17.527703528 +0000 UTC m=+0.039389855 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 09 10:09:17 compute-1 podman[183644]: 2025-10-09 10:09:17.528230692 +0000 UTC m=+0.038517610 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct 09 10:09:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:17.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:17 compute-1 nova_compute[162974]: 2025-10-09 10:09:17.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:18.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:19 compute-1 ceph-mon[9795]: pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:19.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:09:20 compute-1 nova_compute[162974]: 2025-10-09 10:09:20.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:20.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:21 compute-1 ceph-mon[9795]: pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:21.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:22.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:22 compute-1 nova_compute[162974]: 2025-10-09 10:09:22.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:23 compute-1 ceph-mon[9795]: pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:23.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:24.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:25 compute-1 ceph-mon[9795]: pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:25.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:25 compute-1 nova_compute[162974]: 2025-10-09 10:09:25.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:26 compute-1 podman[183682]: 2025-10-09 10:09:26.547710197 +0000 UTC m=+0.058386480 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 09 10:09:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:26.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:27 compute-1 ceph-mon[9795]: pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:27.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:27 compute-1 nova_compute[162974]: 2025-10-09 10:09:27.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:28.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:29 compute-1 ceph-mon[9795]: pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:29.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:30 compute-1 nova_compute[162974]: 2025-10-09 10:09:30.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:30.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:31 compute-1 ceph-mon[9795]: pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:31.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:32.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:32 compute-1 nova_compute[162974]: 2025-10-09 10:09:32.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:33 compute-1 ceph-mon[9795]: pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:33.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:34 compute-1 sudo[183709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:09:34 compute-1 sudo[183709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:34 compute-1 sudo[183709]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:34 compute-1 ceph-mon[9795]: pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:34.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:09:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:35.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:35 compute-1 nova_compute[162974]: 2025-10-09 10:09:35.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:36 compute-1 ceph-mon[9795]: pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:36.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:37 compute-1 podman[183736]: 2025-10-09 10:09:37.524172797 +0000 UTC m=+0.036116882 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 10:09:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:37.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:37 compute-1 nova_compute[162974]: 2025-10-09 10:09:37.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:38.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:39 compute-1 ceph-mon[9795]: pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:40 compute-1 nova_compute[162974]: 2025-10-09 10:09:40.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:40.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:41 compute-1 ceph-mon[9795]: pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:41.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:42.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:42 compute-1 nova_compute[162974]: 2025-10-09 10:09:42.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:43 compute-1 ceph-mon[9795]: pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:43.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:44.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:45 compute-1 ceph-mon[9795]: pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:45.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:45 compute-1 nova_compute[162974]: 2025-10-09 10:09:45.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:46.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:47 compute-1 ceph-mon[9795]: pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:47.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:47 compute-1 nova_compute[162974]: 2025-10-09 10:09:47.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:48 compute-1 podman[183759]: 2025-10-09 10:09:48.525371366 +0000 UTC m=+0.033113341 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 10:09:48 compute-1 podman[183760]: 2025-10-09 10:09:48.537422868 +0000 UTC m=+0.042583457 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:09:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:48.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:49 compute-1 ceph-mon[9795]: pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:49.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:09:50 compute-1 nova_compute[162974]: 2025-10-09 10:09:50.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:50.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:51 compute-1 ceph-mon[9795]: pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:51.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:52 compute-1 nova_compute[162974]: 2025-10-09 10:09:52.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:53 compute-1 ceph-mon[9795]: pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:54 compute-1 sudo[183796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:09:54 compute-1 sudo[183796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:54 compute-1 sudo[183796]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:54 compute-1 ceph-mon[9795]: pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:54.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:55.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:55 compute-1 nova_compute[162974]: 2025-10-09 10:09:55.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:56.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:57 compute-1 ceph-mon[9795]: pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:57 compute-1 podman[183823]: 2025-10-09 10:09:57.540451708 +0000 UTC m=+0.053526277 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 10:09:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:57.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:57 compute-1 nova_compute[162974]: 2025-10-09 10:09:57.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:58.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:59 compute-1 ceph-mon[9795]: pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:09:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:09:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:59.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:10:00 compute-1 ceph-mon[9795]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct 09 10:10:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:10:00 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2459378375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.460 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.656 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.657 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4993MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.658 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.658 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.713 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.713 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.724 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:10:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:00.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:10:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2459720255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.062 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.066 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.076 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.077 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.078 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:10:01 compute-1 ceph-mon[9795]: pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:01 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2459378375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:01 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2459720255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:01.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.079 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.124 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.125 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1012934223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:02 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1023247151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:02.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:03 compute-1 nova_compute[162974]: 2025-10-09 10:10:03.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:03 compute-1 nova_compute[162974]: 2025-10-09 10:10:03.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:10:03 compute-1 ceph-mon[9795]: pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:03.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:04 compute-1 systemd[1]: Starting system activity accounting tool...
Oct 09 10:10:04 compute-1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 10:10:04 compute-1 systemd[1]: Finished system activity accounting tool.
Oct 09 10:10:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:04.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:05 compute-1 nova_compute[162974]: 2025-10-09 10:10:05.109 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:10:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139326846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:05 compute-1 ceph-mon[9795]: pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:10:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1732046277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:05 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4139326846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:10:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:05.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:10:05 compute-1 nova_compute[162974]: 2025-10-09 10:10:05.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:06 compute-1 nova_compute[162974]: 2025-10-09 10:10:06.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:06.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:07 compute-1 ceph-mon[9795]: pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:07.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:07 compute-1 nova_compute[162974]: 2025-10-09 10:10:07.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:08 compute-1 podman[183896]: 2025-10-09 10:10:08.530622736 +0000 UTC m=+0.041730468 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid)
Oct 09 10:10:08 compute-1 sudo[183913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:10:08 compute-1 sudo[183913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:08 compute-1 sudo[183913]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:08 compute-1 sudo[183938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 10:10:08 compute-1 sudo[183938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:08.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:09 compute-1 podman[184019]: 2025-10-09 10:10:09.066088268 +0000 UTC m=+0.036887256 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 09 10:10:09 compute-1 podman[184019]: 2025-10-09 10:10:09.150975521 +0000 UTC m=+0.121774529 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 09 10:10:09 compute-1 ceph-mon[9795]: pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:09 compute-1 podman[184114]: 2025-10-09 10:10:09.447673613 +0000 UTC m=+0.038571533 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 10:10:09 compute-1 podman[184114]: 2025-10-09 10:10:09.455870506 +0000 UTC m=+0.046768427 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 10:10:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:09.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:09 compute-1 podman[184225]: 2025-10-09 10:10:09.779228971 +0000 UTC m=+0.031428255 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 10:10:09 compute-1 podman[184225]: 2025-10-09 10:10:09.788824352 +0000 UTC m=+0.041023636 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct 09 10:10:09 compute-1 podman[184277]: 2025-10-09 10:10:09.922728959 +0000 UTC m=+0.034901943 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., name=keepalived, vcs-type=git, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived)
Oct 09 10:10:09 compute-1 podman[184277]: 2025-10-09 10:10:09.928040434 +0000 UTC m=+0.040213417 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, com.redhat.component=keepalived-container, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=keepalived, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4)
Oct 09 10:10:09 compute-1 sudo[183938]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:10 compute-1 sudo[184304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:10:10 compute-1 sudo[184304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:10 compute-1 sudo[184304]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:10:10.048 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:10:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:10:10.049 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:10:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:10:10.049 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:10:10 compute-1 sudo[184329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:10:10 compute-1 sudo[184329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:10 compute-1 sudo[184329]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:10 compute-1 nova_compute[162974]: 2025-10-09 10:10:10.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:10.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:10 compute-1 ceph-mon[9795]: pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.588737) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611589036, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1946, "num_deletes": 504, "total_data_size": 4265619, "memory_usage": 4338224, "flush_reason": "Manual Compaction"}
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611595310, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2787071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31264, "largest_seqno": 33205, "table_properties": {"data_size": 2779283, "index_size": 4154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 18929, "raw_average_key_size": 18, "raw_value_size": 2761688, "raw_average_value_size": 2764, "num_data_blocks": 179, "num_entries": 999, "num_filter_entries": 999, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004469, "oldest_key_time": 1760004469, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 6600 microseconds, and 4725 cpu microseconds.
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.595350) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2787071 bytes OK
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.595366) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596337) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596346) EVENT_LOG_v1 {"time_micros": 1760004611596343, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4255813, prev total WAL file size 4255813, number of live WAL files 2.
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596977) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2721KB)], [60(15MB)]
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611596999, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 19142106, "oldest_snapshot_seqno": -1}
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6319 keys, 13636523 bytes, temperature: kUnknown
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611625216, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 13636523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13595118, "index_size": 24527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 164408, "raw_average_key_size": 26, "raw_value_size": 13481647, "raw_average_value_size": 2133, "num_data_blocks": 975, "num_entries": 6319, "num_filter_entries": 6319, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.625368) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13636523 bytes
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.625724) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 677.1 rd, 482.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 15.6 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(11.8) write-amplify(4.9) OK, records in: 7346, records dropped: 1027 output_compression: NoCompression
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.625737) EVENT_LOG_v1 {"time_micros": 1760004611625731, "job": 36, "event": "compaction_finished", "compaction_time_micros": 28271, "compaction_time_cpu_micros": 22016, "output_level": 6, "num_output_files": 1, "total_output_size": 13636523, "num_input_records": 7346, "num_output_records": 6319, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611626248, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611628432, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:11.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 10:10:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298435994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:10:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 10:10:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298435994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:10:12 compute-1 ceph-mon[9795]: pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/298435994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:10:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.10:0/298435994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:10:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:10:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:12.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:10:12 compute-1 nova_compute[162974]: 2025-10-09 10:10:12.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:13.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:14 compute-1 ceph-mon[9795]: pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:10:14 compute-1 sudo[184385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:10:14 compute-1 sudo[184385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:14 compute-1 sudo[184385]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:14.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:15 compute-1 sudo[184410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:10:15 compute-1 sudo[184410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:15 compute-1 sudo[184410]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:15.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:15 compute-1 nova_compute[162974]: 2025-10-09 10:10:15.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:16 compute-1 ceph-mon[9795]: pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:10:16 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:16 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:16.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:17 compute-1 nova_compute[162974]: 2025-10-09 10:10:17.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:18 compute-1 ceph-mon[9795]: pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:10:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:18.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:19 compute-1 podman[184439]: 2025-10-09 10:10:19.569268886 +0000 UTC m=+0.059144961 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 09 10:10:19 compute-1 podman[184438]: 2025-10-09 10:10:19.579277586 +0000 UTC m=+0.075224235 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 10:10:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:19.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:20 compute-1 ceph-mon[9795]: pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:10:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:10:20 compute-1 nova_compute[162974]: 2025-10-09 10:10:20.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:20.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:21.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:22 compute-1 ceph-mon[9795]: pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:10:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:22.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:22 compute-1 nova_compute[162974]: 2025-10-09 10:10:22.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:23.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:24 compute-1 ceph-mon[9795]: pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:24.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:25.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:25 compute-1 nova_compute[162974]: 2025-10-09 10:10:25.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:26 compute-1 ceph-mon[9795]: pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:26.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:27.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:27 compute-1 nova_compute[162974]: 2025-10-09 10:10:27.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:28 compute-1 ceph-mon[9795]: pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:28 compute-1 podman[184478]: 2025-10-09 10:10:28.574656332 +0000 UTC m=+0.071693238 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 10:10:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:29.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:30 compute-1 ceph-mon[9795]: pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:30 compute-1 nova_compute[162974]: 2025-10-09 10:10:30.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:30.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:31.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:32 compute-1 ceph-mon[9795]: pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:32.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:32 compute-1 nova_compute[162974]: 2025-10-09 10:10:32.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:33.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:34 compute-1 ceph-mon[9795]: pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:34 compute-1 sudo[184505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:10:34 compute-1 sudo[184505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:34 compute-1 sudo[184505]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:34.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:10:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:35.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:35 compute-1 nova_compute[162974]: 2025-10-09 10:10:35.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:36 compute-1 ceph-mon[9795]: pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:36.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:37.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:37 compute-1 nova_compute[162974]: 2025-10-09 10:10:37.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:38 compute-1 sshd-session[184533]: Accepted publickey for zuul from 192.168.122.10 port 42036 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 10:10:38 compute-1 systemd-logind[798]: New session 44 of user zuul.
Oct 09 10:10:38 compute-1 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 10:10:38 compute-1 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 10:10:38 compute-1 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 10:10:38 compute-1 systemd[1]: Starting User Manager for UID 1000...
Oct 09 10:10:38 compute-1 systemd[184537]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:10:38 compute-1 ceph-mon[9795]: pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:38 compute-1 systemd[184537]: Queued start job for default target Main User Target.
Oct 09 10:10:38 compute-1 systemd[184537]: Created slice User Application Slice.
Oct 09 10:10:38 compute-1 systemd[184537]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 10:10:38 compute-1 systemd[184537]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 10:10:38 compute-1 systemd[184537]: Reached target Paths.
Oct 09 10:10:38 compute-1 systemd[184537]: Reached target Timers.
Oct 09 10:10:38 compute-1 systemd[184537]: Starting D-Bus User Message Bus Socket...
Oct 09 10:10:38 compute-1 systemd[184537]: Starting Create User's Volatile Files and Directories...
Oct 09 10:10:38 compute-1 systemd[184537]: Finished Create User's Volatile Files and Directories.
Oct 09 10:10:38 compute-1 systemd[184537]: Listening on D-Bus User Message Bus Socket.
Oct 09 10:10:38 compute-1 systemd[184537]: Reached target Sockets.
Oct 09 10:10:38 compute-1 systemd[184537]: Reached target Basic System.
Oct 09 10:10:38 compute-1 systemd[184537]: Reached target Main User Target.
Oct 09 10:10:38 compute-1 systemd[184537]: Startup finished in 124ms.
Oct 09 10:10:38 compute-1 systemd[1]: Started User Manager for UID 1000.
Oct 09 10:10:38 compute-1 systemd[1]: Started Session 44 of User zuul.
Oct 09 10:10:38 compute-1 sshd-session[184533]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:10:38 compute-1 sudo[184553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 09 10:10:38 compute-1 sudo[184553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:10:38 compute-1 podman[184587]: 2025-10-09 10:10:38.714578758 +0000 UTC m=+0.056345472 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid)
Oct 09 10:10:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:38.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:39.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:40 compute-1 ceph-mon[9795]: pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:40 compute-1 nova_compute[162974]: 2025-10-09 10:10:40.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:40.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:41 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 09 10:10:41 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3536492109' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:10:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:41.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:42 compute-1 ceph-mon[9795]: from='client.28538 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-1 ceph-mon[9795]: from='client.28544 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-1 ceph-mon[9795]: from='client.18693 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-1 ceph-mon[9795]: from='client.18699 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-1 ceph-mon[9795]: from='client.28559 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-1 ceph-mon[9795]: from='client.28294 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-1 ceph-mon[9795]: pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:42 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3536492109' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:10:42 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1945846219' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:10:42 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3566203389' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:10:42 compute-1 nova_compute[162974]: 2025-10-09 10:10:42.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:43.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:43 compute-1 ovs-vsctl[184859]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 09 10:10:44 compute-1 ceph-mon[9795]: pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:44 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 09 10:10:44 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 09 10:10:44 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 09 10:10:44 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: cache status {prefix=cache status} (starting...)
Oct 09 10:10:44 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:45 compute-1 lvm[185158]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:10:45 compute-1 lvm[185158]: VG ceph_vg0 finished
Oct 09 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: client ls {prefix=client ls} (starting...)
Oct 09 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 09 10:10:45 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093065470' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:45.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: damage ls {prefix=damage ls} (starting...)
Oct 09 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:45 compute-1 nova_compute[162974]: 2025-10-09 10:10:45.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 09 10:10:45 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3532518201' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump loads {prefix=dump loads} (starting...)
Oct 09 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 09 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 09 10:10:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3048919801' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:46 compute-1 ceph-mon[9795]: pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1073987092' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2093065470' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3532518201' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1403612783' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/262828082' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3048919801' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4029739969' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 09 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 09 10:10:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2239107020' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:10:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:46.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: ops {prefix=ops} (starting...)
Oct 09 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 09 10:10:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2825927109' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.18735 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.18729 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.28610 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.28339 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.28625 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.28631 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.18777 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.28369 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.28655 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1462442156' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1160818251' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/895555260' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2170318210' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2239107020' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2182137390' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2825927109' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2256829641' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:10:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3791647089' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: session ls {prefix=session ls} (starting...)
Oct 09 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct 09 10:10:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:47.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: status {prefix=status} (starting...)
Oct 09 10:10:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 09 10:10:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1639166459' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:47 compute-1 nova_compute[162974]: 2025-10-09 10:10:47.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 09 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4283214122' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 09 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/838622420' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 09 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4281246886' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.18801 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.28402 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.28670 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.28432 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.18840 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.28706 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4091719402' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.18858 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.28462 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3843149761' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3791647089' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/536044603' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3435942827' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2847655966' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3107918271' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3743974822' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1639166459' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1972161551' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1803218710' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4283214122' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/838622420' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/382053438' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4281246886' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 09 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4117270880' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 09 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/45453420' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:10:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:48.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 09 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/717780337' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.28733 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2199241439' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/4117270880' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3590603568' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4279356677' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4256149710' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/45453420' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2856552148' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3505807830' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/717780337' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3524732949' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 09 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/489993274' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 09 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2208863461' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:10:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:49.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2519403560' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3875743926' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 09 10:10:50 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3632590972' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.18933 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.28537 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.18948 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/489993274' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/428440663' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2270317148' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2208863461' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2743014829' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2519403560' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3875743926' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1636638593' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3632590972' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3428916114' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2758321859' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 09 10:10:50 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/319661387' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:50 compute-1 podman[186089]: 2025-10-09 10:10:50.564006848 +0000 UTC m=+0.076462530 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 09 10:10:50 compute-1 podman[186090]: 2025-10-09 10:10:50.584246857 +0000 UTC m=+0.089336792 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:10:50 compute-1 nova_compute[162974]: 2025-10-09 10:10:50.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.437780 3 0.000259
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.437976 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000065 1 0.000114
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000049
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 225)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:35.108940+0000 osd.0 (osd.0) 224 : cluster [DBG] 6.e scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:35.123469+0000 osd.0 (osd.0) 225 : cluster [DBG] 6.e scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4562944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:06.649732+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 227 sent 225 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:36.096817+0000 osd.0 (osd.0) 226 : cluster [DBG] 6.5 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:36.114486+0000 osd.0 (osd.0) 227 : cluster [DBG] 6.5 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [123,124], i have 124, src has [1,124]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000506 4 0.000053
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000616 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 227)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:36.096817+0000 osd.0 (osd.0) 226 : cluster [DBG] 6.5 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:36.114486+0000 osd.0 (osd.0) 227 : cluster [DBG] 6.5 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4554752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920632 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:07.649941+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 229 sent 227 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:37.143355+0000 osd.0 (osd.0) 228 : cluster [DBG] 6.2 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:37.153996+0000 osd.0 (osd.0) 229 : cluster [DBG] 6.2 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.929512 5 0.000230
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000052 1 0.000056
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000620 1 0.000022
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014229 2 0.000090
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 229)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:37.143355+0000 osd.0 (osd.0) 228 : cluster [DBG] 6.2 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:37.153996+0000 osd.0 (osd.0) 229 : cluster [DBG] 6.2 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.254469 1 0.000142
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.199120 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.199756 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.199779 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730270386s) [1] async=[1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 40'1059 active pruub 302.409027100s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] exit Reset 0.000089 1 0.000138
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Started
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Start
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Started/Stray
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4521984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:08.650074+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 231 sent 229 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:38.131309+0000 osd.0 (osd.0) 230 : cluster [DBG] 6.a scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:38.141838+0000 osd.0 (osd.0) 231 : cluster [DBG] 6.a scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4521984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 231)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:38.131309+0000 osd.0 (osd.0) 230 : cluster [DBG] 6.a scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:38.141838+0000 osd.0 (osd.0) 231 : cluster [DBG] 6.a scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:09.650201+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 233 sent 231 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:39.145733+0000 osd.0 (osd.0) 232 : cluster [DBG] 6.3 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:39.163525+0000 osd.0 (osd.0) 233 : cluster [DBG] 6.3 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.861375 6 0.000071
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000940 2 0.000043
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 DELETING pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039841 2 0.000114
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.040842 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.902274 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4513792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 233)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:39.145733+0000 osd.0 (osd.0) 232 : cluster [DBG] 6.3 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:39.163525+0000 osd.0 (osd.0) 233 : cluster [DBG] 6.3 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:10.650339+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 235 sent 233 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:40.139145+0000 osd.0 (osd.0) 234 : cluster [DBG] 10.1a scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:40.186147+0000 osd.0 (osd.0) 235 : cluster [DBG] 10.1a scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4513792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 235)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:40.139145+0000 osd.0 (osd.0) 234 : cluster [DBG] 10.1a scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:40.186147+0000 osd.0 (osd.0) 235 : cluster [DBG] 10.1a scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc60f000/0x0/0x4ffc00000, data 0x154aa9/0x1fb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 77.464063 170 0.000532
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 77.465815 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 78.470504 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 78.470528 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538806915s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 active pruub 300.480133057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] exit Reset 0.000078 1 0.000123
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Started
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Start
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Started/Stray
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:11.650504+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 237 sent 235 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:41.125652+0000 osd.0 (osd.0) 236 : cluster [DBG] 10.1d scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:41.153898+0000 osd.0 (osd.0) 237 : cluster [DBG] 10.1d scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4505600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929125 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 237)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:41.125652+0000 osd.0 (osd.0) 236 : cluster [DBG] 10.1d scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:41.153898+0000 osd.0 (osd.0) 237 : cluster [DBG] 10.1d scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.770187 3 0.000226
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.770216 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000058 1 0.000081
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000035
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:12.650671+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 239 sent 237 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:42.125609+0000 osd.0 (osd.0) 238 : cluster [DBG] 10.9 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:42.153846+0000 osd.0 (osd.0) 239 : cluster [DBG] 10.9 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4497408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 239)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:42.125609+0000 osd.0 (osd.0) 238 : cluster [DBG] 10.9 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:42.153846+0000 osd.0 (osd.0) 239 : cluster [DBG] 10.9 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 128 handle_osd_map epochs [128,129], i have 129, src has [1,129]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002967 4 0.000048
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003062 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:13.650847+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 241 sent 239 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:43.103288+0000 osd.0 (osd.0) 240 : cluster [DBG] 10.c scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:43.131563+0000 osd.0 (osd.0) 241 : cluster [DBG] 10.c scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.287339211s of 10.341490746s, submitted: 63
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f(unlocked)] enter Initial
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=0 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=0 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000024
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000106 1 0.000033
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000143 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.917700 5 0.000274
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000080 1 0.000041
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000345 1 0.000023
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035394 2 0.000101
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4497408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 241)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:43.103288+0000 osd.0 (osd.0) 240 : cluster [DBG] 10.c scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:43.131563+0000 osd.0 (osd.0) 241 : cluster [DBG] 10.c scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.100369 2 0.000045
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.100600 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.064024 1 0.000046
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.017850 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.020940 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.020967 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.100759 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.899759293s) [2] async=[2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 40'1059 active pruub 308.632507324s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000458 1 0.000730
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000102 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] exit Reset 0.004411 1 0.004537
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Started
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Start
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] exit Start 0.000009 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Started/Stray
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x15ac76/0x204000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:14.650986+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 243 sent 241 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:44.055789+0000 osd.0 (osd.0) 242 : cluster [DBG] 10.6 deep-scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:44.087089+0000 osd.0 (osd.0) 243 : cluster [DBG] 10.6 deep-scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 4489216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 243)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:44.055789+0000 osd.0 (osd.0) 242 : cluster [DBG] 10.6 deep-scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:44.087089+0000 osd.0 (osd.0) 243 : cluster [DBG] 10.6 deep-scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x15cc5f/0x207000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.207355 5 0.000520
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.203806 6 0.000178
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001910 2 0.000149
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003054 4 0.000130
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000036
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 DELETING pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.042146 2 0.000241
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.044125 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.248037 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.070374 1 0.000064
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:15.651601+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 245 sent 243 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:45.026245+0000 osd.0 (osd.0) 244 : cluster [DBG] 10.a scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:45.068603+0000 osd.0 (osd.0) 245 : cluster [DBG] 10.a scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.466371 1 0.000036
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.539968 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 1.747725 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000078 1 0.000113
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000469 2 0.000032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=32
Oct 09 10:10:50 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=32
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001068 2 0.000055
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 245)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:45.026245+0000 osd.0 (osd.0) 244 : cluster [DBG] 10.a scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:45.068603+0000 osd.0 (osd.0) 245 : cluster [DBG] 10.a scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 132 ms_handle_reset con 0x560c9c8a1800 session 0x560c9d630d20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:16.651802+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 247 sent 245 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:45.994370+0000 osd.0 (osd.0) 246 : cluster [DBG] 10.0 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:46.033184+0000 osd.0 (osd.0) 247 : cluster [DBG] 10.0 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [132,133], i have 133, src has [1,133]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002487 2 0.000117
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004414 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002016 4 0.001053
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952396 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 247)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:45.994370+0000 osd.0 (osd.0) 246 : cluster [DBG] 10.0 scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:46.033184+0000 osd.0 (osd.0) 247 : cluster [DBG] 10.0 scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:17.651995+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 249 sent 247 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:46.955262+0000 osd.0 (osd.0) 248 : cluster [DBG] 10.d scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:46.993836+0000 osd.0 (osd.0) 249 : cluster [DBG] 10.d scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 249)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:46.955262+0000 osd.0 (osd.0) 248 : cluster [DBG] 10.d scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:46.993836+0000 osd.0 (osd.0) 249 : cluster [DBG] 10.d scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:18.652114+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  log_queue is 2 last_log 251 sent 249 num 2 unsent 2 sending 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:47.939990+0000 osd.0 (osd.0) 250 : cluster [DBG] 10.b scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  will send 2025-10-09T09:39:47.964665+0000 osd.0 (osd.0) 251 : cluster [DBG] 10.b scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client handle_log_ack log(last 251)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:47.939990+0000 osd.0 (osd.0) 250 : cluster [DBG] 10.b scrub starts
Oct 09 10:10:50 compute-1 ceph-osd[7514]: log_client  logged 2025-10-09T09:39:47.964665+0000 osd.0 (osd.0) 251 : cluster [DBG] 10.b scrub ok
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:19.652284+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4415488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:20.652404+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:21.652549+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953544 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:22.652660+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:23.652804+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:24.652956+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:25.653111+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:26.653242+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a066000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.116673470s of 13.153404236s, submitted: 45
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953676 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:27.653378+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:28.653541+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4374528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:29.653737+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4472832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:30.653834+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4472832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:31.653928+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4464640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954348 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:32.654012+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a1000 session 0x560c9d2dc5a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9b7d1800 session 0x560c9d8512c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4464640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:33.654143+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9cbe2c00 session 0x560c9c5994a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9cf88000 session 0x560c9d20c960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4456448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:34.654311+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4456448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:35.654457+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4448256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:36.654584+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4448256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954348 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:37.654760+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:38.654861+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:39.655773+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:40.655931+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:41.656130+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.990660667s of 14.992744446s, submitted: 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4415488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954216 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:42.656297+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:43.656441+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a0000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:44.656552+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a0800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:45.656699+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:46.656832+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954480 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:47.656958+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:48.657068+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:49.657169+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:50.657290+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4366336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:51.657407+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4366336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955992 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:52.657556+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4358144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:53.657665+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4358144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:54.657816+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9a066000 session 0x560c9d208d20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4349952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:55.658001+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.551061630s of 13.556247711s, submitted: 4
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4349952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:56.658144+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955401 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:57.658248+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:58.658348+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:59.658513+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4333568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:00.658632+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4333568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:01.658780+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:02.658927+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:03.659061+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:04.659154+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 4317184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:05.659269+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 4308992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:06.659372+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:07.659513+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:08.659640+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:09.659773+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4292608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:10.659929+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4292608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:11.660030+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:12.660160+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:13.660277+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:14.660425+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 4276224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:15.660581+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 4276224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:16.660740+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4268032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:17.660840+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4268032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:18.660935+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:19.661027+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:20.661128+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:21.661234+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.443452835s of 26.446563721s, submitted: 3
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:22.661335+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:23.661432+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:24.661537+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4235264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:25.661652+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4235264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:26.661775+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 4227072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:27.661887+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 4227072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:28.661979+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:29.662119+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:30.662270+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:31.662412+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:32.662527+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:33.662636+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:34.662727+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:35.662829+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:36.662931+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:37.663029+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:38.663120+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:39.663221+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:40.663337+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:41.663431+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:42.663543+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:43.663673+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:44.663802+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 4186112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:45.663938+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 4186112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:46.664078+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:47.664204+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:48.664297+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:49.664390+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4169728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:50.664634+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4169728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:51.664731+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4161536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:52.664819+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4161536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:53.664925+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:54.665055+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:55.665225+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:56.665354+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:57.665483+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:58.665606+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:59.665723+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4136960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:00.665865+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4136960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:01.666021+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9dab7400 session 0x560c9d20f860
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a0800 session 0x560c9cf7a960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:02.666135+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:03.666239+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:04.666346+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4120576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:05.666527+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4120576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:06.666656+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4112384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:07.666818+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4112384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:08.666950+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:09.667126+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:10.667232+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:11.667381+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4096000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9b7d1800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.453056335s of 50.454822540s, submitted: 1
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:12.667484+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4096000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:13.667630+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4087808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:14.667749+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4087808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:15.667884+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:16.668019+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:17.668152+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:18.668280+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a1000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 4071424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:19.668383+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 4071424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:20.668532+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 4063232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:21.668635+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 4063232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:22.668749+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:23.668859+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:24.668962+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:25.669089+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 4046848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:26.669210+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 4046848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:27.669307+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 4038656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:28.669402+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 4038656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.000545502s of 17.003219604s, submitted: 2
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:29.669496+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:30.669593+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:31.669704+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:32.669842+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:33.669948+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:34.670050+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:35.670185+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 4014080 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:36.670290+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 4014080 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:37.670390+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 4005888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:38.670485+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 4005888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:39.670578+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3997696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:40.670721+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3997696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:41.670832+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3989504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:42.670937+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3989504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:43.671026+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 3981312 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:44.671176+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3964928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:45.671293+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3964928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:46.671607+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:47.671710+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:48.671810+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:49.671902+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3948544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:50.672006+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3948544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:51.672131+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3940352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:52.672249+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d2dd0e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a0000 session 0x560c9d20fa40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3940352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:53.672362+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:54.672495+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:55.672669+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:56.672813+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:57.672955+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:58.673130+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:59.673265+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:00.673366+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3923968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:01.673474+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3923968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:02.673623+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3915776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:03.673738+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3915776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9cbe2c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.123451233s of 34.124847412s, submitted: 1
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:04.673877+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3907584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:05.674001+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3907584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:06.674146+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:07.674254+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:08.674373+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:09.674477+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3891200 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:10.674581+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3883008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:11.674684+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3883008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:12.674994+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3874816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:13.675117+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3874816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:14.675318+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:15.675465+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:16.675567+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:17.675704+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3858432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:18.675813+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3858432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.923893929s of 14.924749374s, submitted: 1
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:19.675931+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:20.676061+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:21.676159+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:22.676253+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3842048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:23.676396+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3842048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:24.676508+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3833856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:25.676637+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3833856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:26.676743+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:27.676859+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:28.676982+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:29.677132+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3817472 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:30.677266+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 3809280 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:31.677357+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:32.677451+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:33.677552+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:34.677656+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3792896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:35.677718+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3792896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:36.677814+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3784704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:37.677923+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3784704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:38.678014+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:39.678143+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:40.678229+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:41.678322+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3768320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:42.678412+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3768320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:43.678513+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:44.678634+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:45.678763+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:46.679008+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3751936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:47.679116+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3751936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:48.679220+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3743744 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:49.680003+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3743744 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:50.680107+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:51.680224+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:52.680355+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:53.680491+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 3727360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:54.680627+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 3727360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:55.680760+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:56.680866+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:57.680999+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:58.681108+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3702784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:59.681234+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3702784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:00.681333+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 3694592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:01.681447+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 3694592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:02.681541+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 3686400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:03.681651+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 3686400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:04.681745+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:05.681848+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:06.681943+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:07.682044+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:08.682140+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:09.682235+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:10.682334+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:11.682444+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:12.682546+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:13.682650+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 3653632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:14.682724+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 3653632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:15.682831+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 3637248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:16.682926+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 3637248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:17.683026+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 3629056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:18.683422+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 3629056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:19.683534+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:20.683653+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:21.683735+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:22.683869+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 3612672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:23.684006+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 3612672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:24.684120+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 3604480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:25.684248+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 3604480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:26.684389+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:27.684494+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:28.684603+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:29.684726+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 3588096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:30.684828+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:31.684925+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:32.685031+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:33.685122+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:34.685224+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 3571712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:35.685346+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 3571712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:36.685449+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3563520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:37.685551+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3563520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:38.685662+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3555328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:39.685788+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3555328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:40.685894+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:41.686025+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:42.686157+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:43.686257+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 3530752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:44.686378+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 3530752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:45.686530+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3522560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:46.686647+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3522560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:47.686786+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:48.686960+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:49.687066+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:50.687220+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3506176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:51.687331+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3506176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:52.687446+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:53.687558+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:54.687766+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:55.687895+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 3489792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:56.688002+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 3489792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:57.688101+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:58.688197+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:59.688295+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:00.688405+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:01.688503+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:02.688626+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:03.688751+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:04.688855+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:05.688973+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:06.689090+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:07.689208+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:08.689346+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:09.689462+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:10.689902+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:11.689993+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:12.690106+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:13.690246+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3416064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:14.690379+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3416064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:15.690526+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:16.690617+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:17.690739+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:18.690857+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:19.691010+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:20.691145+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Cumulative writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                          Cumulative WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 21.18 MB, 0.04 MB/s
                                          Interval WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
                                          
                                          ** Compaction Stats [m-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-0] **
                                          
                                          ** Compaction Stats [m-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-1] **
                                          
                                          ** Compaction Stats [m-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-2] **
                                          
                                          ** Compaction Stats [p-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-0] **
                                          
                                          ** Compaction Stats [p-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-1] **
                                          
                                          ** Compaction Stats [p-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-2] **
                                          
                                          ** Compaction Stats [O-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-0] **
                                          
                                          ** Compaction Stats [O-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-1] **
                                          
                                          ** Compaction Stats [O-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-2] **
                                          
                                          ** Compaction Stats [L] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [L] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [L] **
                                          
                                          ** Compaction Stats [P] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [P] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [P] **
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:21.691276+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:22.691414+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:23.691507+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:24.691599+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3317760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:25.691714+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:26.691884+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:27.692050+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:28.692213+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:29.692377+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:30.692538+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:31.692661+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:32.692806+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:33.692950+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:34.693109+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:35.693260+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:36.693392+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:37.693497+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:38.693633+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:39.693780+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:40.693931+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:41.694061+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:42.694181+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:43.694324+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3244032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:44.694455+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3244032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:45.694584+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:46.694702+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:47.694829+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:48.694976+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:49.695082+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:50.695190+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:51.695324+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:52.695416+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:53.695535+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:54.695647+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:55.695809+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:56.695922+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:57.696056+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:58.696201+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:59.696342+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:00.696468+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:01.696607+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:02.696739+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:03.696853+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:04.696996+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:05.697175+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:06.697301+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:07.697416+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:08.697518+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 3153920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:09.697613+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 3153920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:10.697727+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:11.697817+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:12.697915+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:13.698017+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:14.698123+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:15.698239+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:16.698347+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:17.698451+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:18.698543+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:19.698643+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:20.698739+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:21.698839+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:22.698944+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:23.699061+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:24.699196+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:25.699325+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:26.699429+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:27.699535+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:28.699651+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:29.699731+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:30.699825+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:31.699920+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:32.700008+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:33.700115+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:34.700227+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:35.700342+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:36.700450+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:37.700615+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:38.700779+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:39.700891+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:40.701029+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:41.701139+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:42.701282+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:43.701381+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:44.701481+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 3014656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 206.934524536s of 206.935745239s, submitted: 1
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:45.701582+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [1])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 2695168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:46.701674+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:47.701790+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:48.701882+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:49.702062+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:50.702169+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:51.702284+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:52.702399+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:53.702505+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:54.702619+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:55.702772+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:56.702872+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:57.702970+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:58.703093+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:59.703194+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:00.703316+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:01.703442+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:02.703552+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:03.703679+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:04.703790+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:05.703947+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:06.704062+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:07.704160+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:08.704310+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:09.704473+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:10.704581+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:11.704700+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:12.704802+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:13.704971+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:14.705138+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:15.705262+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:16.705362+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:17.705493+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:18.705586+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:19.705713+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:20.705811+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:21.705916+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:22.706019+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:23.706124+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:24.706225+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:25.706357+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:26.706456+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:27.706560+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:28.706655+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:29.706723+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:30.706825+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:31.706938+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:32.707038+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:33.707137+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:34.707238+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:35.707380+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:36.707494+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:37.707609+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:38.707757+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:39.707863+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:40.708004+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:41.708114+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:42.708224+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:43.708315+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:44.708406+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:45.708531+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:46.708620+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:47.708721+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:48.708830+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:49.708928+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:50.709025+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:51.709126+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:52.709223+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:53.709327+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:54.709431+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:55.709546+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:56.709650+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:57.709752+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:58.709884+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:59.709996+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:00.710096+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:01.710198+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:02.710293+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:03.710397+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:04.710516+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:05.710648+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:06.710795+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:07.710899+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:08.710992+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:09.711088+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:10.711190+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:11.711322+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:12.711423+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:13.711525+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:14.711677+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:15.711813+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:16.711934+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:17.712100+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:18.712223+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:19.712400+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:20.712530+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:21.712669+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:22.712742+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:23.712844+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:24.712961+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:25.713087+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:26.713229+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:27.713322+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:28.713420+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:29.713510+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:30.713611+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:31.713734+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:32.713834+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:33.713929+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:34.714024+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:35.714139+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:36.714241+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:37.714344+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:38.714451+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:39.714559+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:40.714744+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:41.714850+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:42.714944+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:43.715039+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:44.715136+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:45.715251+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:46.715360+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:47.715473+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:48.715601+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:49.715708+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:50.715829+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:51.715945+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:52.716041+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:53.716156+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:54.716279+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:55.716394+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:56.716494+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:57.716585+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:58.716695+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:59.716808+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:00.716911+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:01.717012+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:02.717114+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:03.717250+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:04.717375+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:05.717504+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:06.717619+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:07.717735+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:08.717827+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:09.717929+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:10.718040+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:11.718153+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:12.718261+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:13.718356+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:14.718447+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:15.718577+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:16.718680+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:17.718835+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:18.718958+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:19.719055+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:20.719145+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:21.719235+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:22.719581+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:23.719709+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:24.719802+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:25.719915+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:26.720012+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:27.720103+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:28.720197+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:29.720291+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:30.720396+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:31.720499+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:32.720613+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:33.720783+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:34.720918+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:35.721075+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:36.721238+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:37.721348+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:38.721446+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:39.721570+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:40.721730+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:41.721837+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:42.721927+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:43.722017+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:44.722117+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:45.722240+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:46.722353+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:47.722458+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:48.722589+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:49.722730+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:50.722879+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:51.723023+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:52.723164+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:53.723296+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:54.723438+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:55.723611+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:56.723730+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:57.723848+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:58.723989+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:59.724101+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:00.724202+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:01.724671+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:02.724786+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:03.724883+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:04.725006+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:05.725116+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:06.725214+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:07.725325+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:08.725428+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:09.725561+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:10.725678+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:11.725788+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:12.725890+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:13.725986+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:14.726081+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:15.726205+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:16.726314+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:17.726424+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:18.726526+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:19.726639+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:20.726731+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:21.726841+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:22.726946+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:23.727042+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:24.727183+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:25.727305+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:26.727402+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:27.727492+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:28.727588+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:29.727701+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:30.727801+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:31.727925+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:32.728045+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:33.728163+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:34.728306+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:35.728450+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:36.728589+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:37.728710+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:38.728811+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:39.728924+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:40.729026+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:41.729130+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:42.729231+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:43.729345+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:44.729450+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:45.729583+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:46.729713+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:47.729813+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:48.729914+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:49.730025+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:50.730134+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:51.730245+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:52.730332+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:53.730471+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:54.730560+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:55.730709+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:56.730798+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:57.730903+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:58.730996+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:59.731085+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:00.731177+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:01.731266+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:02.731377+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:03.731603+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:04.731737+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:05.731854+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:06.732010+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:07.732130+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:08.732249+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:09.732350+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:10.732454+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:11.732565+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:12.732659+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:13.732720+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:14.732870+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:15.733010+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:16.733138+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:17.733246+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:18.733357+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:19.733525+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:20.733678+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:21.733845+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:22.733984+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:23.734115+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:24.734255+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:25.734372+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:26.734516+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:27.734657+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:28.734821+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:29.734968+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:30.735125+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:31.735280+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:32.735426+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:33.735575+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:34.735716+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:35.735876+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:36.736018+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:37.736187+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:38.736325+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:39.736447+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:40.736563+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:41.736680+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:42.736824+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:43.736937+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:44.737086+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:45.737236+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:46.737384+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:47.737520+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [1])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:48.737642+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:49.737758+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:50.737898+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:51.738031+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:52.738190+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:53.738339+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:54.738481+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:55.738622+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:56.738798+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:57.738915+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:58.739018+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:59.739173+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:00.739286+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:01.739430+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:02.739573+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:03.739709+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:04.739855+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:05.740011+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:06.740143+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:07.740285+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:08.740426+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:09.740562+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:10.740671+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:11.740735+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:12.740875+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:13.740997+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:14.741138+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:15.741292+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:16.741416+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:17.741551+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:18.741718+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:19.741890+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:20.742010+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9ade0c00 session 0x560c9b978780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ade0c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:21.742168+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:22.742320+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:23.742469+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:24.742612+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:25.742831+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:26.742992+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:27.743143+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:28.743290+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:29.743425+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:30.743560+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:31.743766+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:32.743932+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:33.744071+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:34.744255+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:35.744425+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:36.744581+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:37.744720+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:38.744859+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:39.745006+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:40.745159+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:41.745309+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:42.745425+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:43.745562+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:44.745715+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:45.745872+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:46.746005+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:47.746132+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:48.746279+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:49.746412+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:50.746573+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:51.746746+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:52.746881+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:53.747047+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:54.747201+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:55.747353+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:56.747523+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:57.747666+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:58.747820+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:59.747969+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:00.748114+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:01.748248+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:02.748390+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:03.748538+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:04.748714+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:05.748904+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:06.749035+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:07.749169+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:08.749310+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:09.749444+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:10.749540+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:11.749719+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:12.749871+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:13.749997+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:14.750114+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:15.750271+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:16.750412+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:17.750548+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:18.750654+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:19.750789+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:20.750892+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:21.751036+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:22.751180+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:23.751323+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:24.751471+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:25.751635+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:26.751770+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:27.751884+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:28.752026+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:29.752183+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:30.752331+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:31.752487+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:32.752628+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:33.752772+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:34.752919+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:35.753085+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:36.753231+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:37.753349+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:38.753487+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:39.753633+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:40.753778+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:41.753910+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:42.754022+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:43.754158+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:44.754305+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:45.754469+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:46.754600+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:47.754747+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:48.754878+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:49.755052+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:50.755196+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:51.755343+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:52.755481+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:53.755597+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:54.755748+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:55.755912+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:56.756058+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:57.756207+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:58.756345+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:59.756472+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:00.756602+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:01.756750+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:02.756866+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:03.757027+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:04.757167+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:05.757348+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:06.757475+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:07.757592+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:08.757715+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:09.757841+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:10.757969+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:11.758094+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:12.758220+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:13.758343+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:14.758478+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:15.758630+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:16.758775+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:17.758912+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:18.759043+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:19.759178+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:20.759357+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:21.759504+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:22.759684+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:23.759896+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:24.760067+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:25.760243+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:26.760437+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:27.760612+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:28.760789+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:29.760926+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:30.761092+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:31.761229+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:32.761374+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:33.761507+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:34.761645+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:35.761825+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:36.761957+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:37.762095+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:38.762224+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:39.762328+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:40.762438+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:41.762550+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:42.762726+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:43.762848+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:44.763008+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:45.763180+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:46.763321+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:47.763441+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:48.763613+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:49.763804+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:50.763943+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:51.764060+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:52.764201+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:53.764336+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:54.764486+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:55.764676+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:56.764854+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:57.765003+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:58.765133+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:59.765259+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:00.765534+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:01.765742+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 2400256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:02.765872+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 496.980529785s of 497.169708252s, submitted: 379
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc5f7000/0x0/0x4ffc00000, data 0x164ccd/0x214000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 2400256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc5f7000/0x0/0x4ffc00000, data 0x164ccd/0x214000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:03.766036+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1310720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968373 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:04.766182+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 136 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b4752c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1294336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:05.766316+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 137 ms_handle_reset con 0x560c9dab7400 session 0x560c9c598780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 16670720 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:06.766465+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 16621568 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:07.766599+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 16621568 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:08.766747+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fb179000/0x0/0x4ffc00000, data 0x15db086/0x1690000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 16588800 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114638 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:09.766896+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 16588800 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:10.767031+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:11.767177+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:12.767318+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:13.767466+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:14.767572+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:15.767716+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:16.767823+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:17.767971+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:18.768122+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:19.768227+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:20.768344+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Cumulative writes: 9273 writes, 35K keys, 9273 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                          Cumulative WAL: 9273 writes, 2281 syncs, 4.07 writes per sync, written: 0.02 GB, 0.02 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 860 writes, 1592 keys, 860 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s
                                          Interval WAL: 860 writes, 406 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
                                          
                                          ** Compaction Stats [m-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-0] **
                                          
                                          ** Compaction Stats [m-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-1] **
                                          
                                          ** Compaction Stats [m-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-2] **
                                          
                                          ** Compaction Stats [p-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.4      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-0] **
                                          
                                          ** Compaction Stats [p-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-1] **
                                          
                                          ** Compaction Stats [p-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-2] **
                                          
                                          ** Compaction Stats [O-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-0] **
                                          
                                          ** Compaction Stats [O-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-1] **
                                          
                                          ** Compaction Stats [O-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.2      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c992729b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-2] **
                                          
                                          ** Compaction Stats [L] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [L] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [L] **
                                          
                                          ** Compaction Stats [P] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [P] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [P] **
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:21.768452+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:22.768590+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:23.768705+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:24.768853+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:25.768990+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:26.769114+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:27.769222+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:28.769361+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:29.769468+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:30.769598+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:31.769711+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:32.769853+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:33.769949+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 09 10:10:50 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1031980098' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:34.770080+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:35.770207+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:36.770341+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:37.770442+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:38.770545+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:39.770644+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:40.770778+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:41.770869+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d851e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d81d0e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9cf88000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9cf88000 session 0x560c9d81cd20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:42.771005+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.565647125s of 40.628654480s, submitted: 75
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d81cb40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d2912c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:43.771113+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b907e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 16531456 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114524 data_alloc: 218103808 data_used: 286720
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:44.771235+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 16531456 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:45.771348+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _renew_subs
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9dab7400 session 0x560c9df7c5a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9daae000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9daae000 session 0x560c9cb93c20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d815a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d5b65a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b9790e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:46.771480+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fa989000/0x0/0x4ffc00000, data 0x1dc82a7/0x1e82000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:47.771581+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9dab7400 session 0x560c9cecfe00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fa989000/0x0/0x4ffc00000, data 0x1dc82a7/0x1e82000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:48.771727+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9daad000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9daad000 session 0x560c9d291c20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187648 data_alloc: 218103808 data_used: 286720
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:49.771840+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d20f860
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f800 session 0x560c9b88a1e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 15376384 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:50.771964+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 15278080 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:51.772058+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:52.772167+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:53.772298+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251296 data_alloc: 218103808 data_used: 8495104
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:54.772438+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:55.772565+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:56.772717+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:57.772812+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:58.772939+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251296 data_alloc: 218103808 data_used: 8495104
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:59.773053+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:00.773183+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.162794113s of 18.220115662s, submitted: 57
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:01.773280+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 1507328 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:02.773411+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1064960 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:03.773542+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1064960 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:04.773674+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1064960 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:05.773855+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:06.773987+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:07.774120+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:08.774265+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:09.774385+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102842368 unmapped: 933888 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:10.774516+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102842368 unmapped: 933888 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:11.774644+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102842368 unmapped: 933888 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:12.774774+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:13.774907+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:14.775359+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:15.775512+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:16.775631+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:17.775729+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:18.775842+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:19.775951+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab0e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.730630875s of 18.770584106s, submitted: 77
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9d815680
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101392384 unmapped: 2383872 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9b9781e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:20.776050+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58d800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9b8872c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58d800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9cb93a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9df7dc20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9aaf50e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab0e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9df7da40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:21.776159+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:22.776304+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:23.776424+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9d20d0e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393552 data_alloc: 218103808 data_used: 9076736
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:24.776559+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9a89cb40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:25.776680+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9dae4f00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9cd65a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101515264 unmapped: 13811712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:26.776846+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab0e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58d800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101515264 unmapped: 13811712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:27.776986+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 9478144 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:28.777156+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464002 data_alloc: 234881024 data_used: 18825216
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:29.777278+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8550000/0x0/0x4ffc00000, data 0x305f299/0x311c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:30.777401+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:31.777497+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:32.777634+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.763611794s of 12.791978836s, submitted: 22
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 4505600 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:33.777744+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 4505600 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464474 data_alloc: 234881024 data_used: 18825216
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:34.777851+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f854e000/0x0/0x4ffc00000, data 0x3060299/0x311d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 4472832 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:35.777971+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 4472832 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:36.778094+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 4440064 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:37.778200+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 2662400 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:38.778352+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cb000/0x0/0x4ffc00000, data 0x3be3299/0x3ca0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566700 data_alloc: 234881024 data_used: 19333120
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:39.778584+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:40.778728+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:41.778817+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cb000/0x0/0x4ffc00000, data 0x3be3299/0x3ca0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:42.778917+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:43.779025+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566700 data_alloc: 234881024 data_used: 19333120
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:44.779153+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.381125450s of 12.448619843s, submitted: 102
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:45.779264+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 5226496 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:46.779359+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3be3299/0x3ca0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 5226496 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:47.779454+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9d8150e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9d8152c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 10952704 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9cb92000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:48.779587+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330151 data_alloc: 218103808 data_used: 9076736
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:49.779777+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:50.779938+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8bd7000/0x0/0x4ffc00000, data 0x25c9289/0x2685000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:51.780069+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:52.780146+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:53.780277+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8bd7000/0x0/0x4ffc00000, data 0x25c9289/0x2685000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330151 data_alloc: 218103808 data_used: 9076736
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:54.780377+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:55.780570+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:56.780716+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d290780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab7400 session 0x560c9df7cd20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8bd7000/0x0/0x4ffc00000, data 0x25c9289/0x2685000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.680329323s of 11.851483345s, submitted: 377
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9aaf41e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:57.780824+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:58.780967+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150904 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:59.781087+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:00.781211+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:01.781343+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:02.781504+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:03.781616+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150904 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:04.781724+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:05.781838+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:06.781961+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:07.782068+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:08.782157+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150904 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:09.782261+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:10.782376+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:11.782476+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:12.782582+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab0e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.428812027s of 15.439086914s, submitted: 20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9d645e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9dae61e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d738b40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d5dc1e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab7400 session 0x560c9ab55a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 24510464 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:13.782736+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961e000/0x0/0x4ffc00000, data 0x1b84269/0x1c3e000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 24510464 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198552 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:14.782862+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 24510464 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9d739a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:15.782989+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d645680
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9cf7a960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9a89dc20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 25321472 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:16.784550+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab7400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 25305088 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:17.784745+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:18.784905+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1b84279/0x1c3f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240190 data_alloc: 218103808 data_used: 6066176
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:19.785049+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:20.785165+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:21.785278+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:22.785375+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:23.785476+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1b84279/0x1c3f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240190 data_alloc: 218103808 data_used: 6066176
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:24.785577+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:25.785698+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 23937024 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:26.785790+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.976077080s of 13.989899635s, submitted: 12
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1b84279/0x1c3f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107970560 unmapped: 19996672 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d0a000/0x0/0x4ffc00000, data 0x2497279/0x2552000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:27.785873+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 21733376 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:28.786039+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 21733376 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312780 data_alloc: 218103808 data_used: 6541312
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:29.786174+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:30.786285+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:31.786395+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:32.786489+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:33.786604+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312780 data_alloc: 218103808 data_used: 6541312
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:34.786717+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:35.786824+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:36.786995+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:37.787154+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:38.787247+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312780 data_alloc: 218103808 data_used: 6541312
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:39.787406+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:40.787526+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:41.787632+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:42.787733+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:43.787855+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312932 data_alloc: 218103808 data_used: 6545408
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:44.787961+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:45.788080+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:46.788178+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:47.788278+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9b888d20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab7400 session 0x560c9da645a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:48.788371+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.863149643s of 21.914012909s, submitted: 80
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9dae5c20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:49.788460+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:50.788558+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:51.788738+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:52.788844+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:53.789149+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:54.789263+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:55.789429+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:56.789541+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:57.789732+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:58.789832+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:59.790011+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:00.790114+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:01.790277+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:02.790448+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:03.790554+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:04.790748+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:05.790906+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:06.791051+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:07.791167+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:08.791275+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.242803574s of 20.254222870s, submitted: 18
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d814960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b88be00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9db541e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58d800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9d209680
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9df7c780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:09.791412+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213863 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:10.791546+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:11.791678+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:12.791843+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:13.791937+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:14.792063+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213863 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:15.792189+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:16.792321+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:17.792454+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:18.792574+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.318160057s of 10.331529617s, submitted: 11
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9cecfc20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:19.792665+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 28049408 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217388 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:20.792736+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103317504 unmapped: 27803648 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:21.792888+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 25927680 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:22.792995+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 25927680 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:23.793093+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 25927680 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:24.793236+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260232 data_alloc: 218103808 data_used: 6615040
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9481000/0x0/0x4ffc00000, data 0x1d21269/0x1ddb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:25.793416+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:26.793544+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:27.793641+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:28.793770+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9481000/0x0/0x4ffc00000, data 0x1d21269/0x1ddb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:29.793858+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260232 data_alloc: 218103808 data_used: 6615040
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.570923805s of 10.576161385s, submitted: 7
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:30.793996+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 22528000 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:31.794084+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 22650880 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:32.794211+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 22650880 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:33.794313+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 22650880 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:34.794441+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330668 data_alloc: 218103808 data_used: 7471104
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:35.794581+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:36.794672+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:37.794781+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:38.794920+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 22634496 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:39.795013+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 22634496 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330668 data_alloc: 218103808 data_used: 7471104
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:40.795106+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 22634496 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:41.795201+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:42.795341+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:43.795481+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:44.795578+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330668 data_alloc: 218103808 data_used: 7471104
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:45.795763+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d1d05a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d8503c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee000 session 0x560c9b474000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 22618112 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d20e1e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.934541702s of 15.981528282s, submitted: 74
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d81cd20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9dac5e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d2092c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee400 session 0x560c9d5dc780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee400 session 0x560c9cd65a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:46.795890+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109633536 unmapped: 25165824 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2bb6279/0x2c71000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:47.796030+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109633536 unmapped: 25165824 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:48.796193+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:49.796334+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385271 data_alloc: 218103808 data_used: 7471104
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:50.796445+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9b978780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:51.796552+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2bb6279/0x2c71000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d1d0780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:52.796679+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d738f00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d644960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:53.796844+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109625344 unmapped: 25174016 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:54.796991+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 25329664 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386529 data_alloc: 218103808 data_used: 7475200
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:55.797135+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 21127168 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85ea000/0x0/0x4ffc00000, data 0x2bb6289/0x2c72000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:56.797243+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:57.797387+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:58.797498+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:59.797614+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431673 data_alloc: 234881024 data_used: 14135296
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:00.797804+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:01.797989+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85ea000/0x0/0x4ffc00000, data 0x2bb6289/0x2c72000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 21061632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:02.798103+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 21061632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:03.798253+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 21061632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.221870422s of 18.247957230s, submitted: 23
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:04.798410+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 19734528 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1462623 data_alloc: 234881024 data_used: 14249984
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:05.798565+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 19611648 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:06.798681+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 19611648 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:07.798863+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 19611648 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:08.799042+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 19578880 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:09.799185+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 19546112 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466451 data_alloc: 234881024 data_used: 14245888
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:10.799331+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 19546112 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:11.799477+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 19546112 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d2083c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9da65a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d81d0e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:12.799648+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 22200320 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:13.799818+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 22200320 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:14.799974+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 22200320 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335504 data_alloc: 218103808 data_used: 7471104
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d5a000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9dae50e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.919174194s of 10.969374657s, submitted: 59
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9dac54a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d814000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:15.800158+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:16.800298+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:17.800439+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:18.800614+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:19.800798+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180974 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:20.800948+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:21.801111+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:22.801254+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:23.801367+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:24.801536+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180974 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:25.801738+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:26.801900+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:27.802049+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:28.802187+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:29.802299+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180974 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:30.802427+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:31.802552+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:32.802710+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.410537720s of 18.433294296s, submitted: 33
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9da652c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:33.802808+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [1])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d20c3c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 27156480 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9da650e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9db17a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d81c780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d5dd2c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9db57e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:34.802946+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267502 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:35.803188+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:36.803373+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:37.803516+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:38.803723+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d7383c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc3000/0x0/0x4ffc00000, data 0x21df269/0x2299000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:39.803859+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267502 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee400 session 0x560c9d81c3c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9da65860
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9f000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d2dc960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:40.804000+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 35389440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:41.804091+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 35389440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:42.804229+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:43.804330+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:44.804462+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345620 data_alloc: 234881024 data_used: 11042816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:45.804583+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:46.804682+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:47.804823+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:48.804965+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:49.805125+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345620 data_alloc: 234881024 data_used: 11042816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:50.805273+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111853568 unmapped: 30826496 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:51.805423+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.394321442s of 18.419612885s, submitted: 19
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111419392 unmapped: 31260672 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:52.805585+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 27025408 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:53.805723+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 27025408 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:54.805885+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 27025408 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416066 data_alloc: 234881024 data_used: 11665408
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:55.806055+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 26927104 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:56.806191+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 26927104 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:57.806328+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 26927104 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:58.806466+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:59.806608+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416082 data_alloc: 234881024 data_used: 11665408
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:00.806727+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:01.806893+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:02.807039+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:03.807146+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:04.807272+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416082 data_alloc: 234881024 data_used: 11665408
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:05.807422+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:06.807552+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 26910720 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:07.807679+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 26910720 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:08.807822+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 26910720 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:09.807976+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26902528 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416082 data_alloc: 234881024 data_used: 11665408
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:10.808115+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26902528 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:11.808216+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26902528 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9db57c20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4eec00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9cd65a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9d5dc780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9d209680
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.403728485s of 20.452980042s, submitted: 60
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d2092c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac4d20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4eec00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9da64d20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef400 session 0x560c9a89d680
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db563c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:12.808374+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:13.808548+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:14.808720+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84d7000/0x0/0x4ffc00000, data 0x2cc9289/0x2d85000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448246 data_alloc: 234881024 data_used: 11665408
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:15.808898+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9a89de00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:16.809007+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4eec00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9d81d4a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9db572c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9dac52c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:17.809110+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114900992 unmapped: 27779072 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:18.809206+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 26091520 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:19.809311+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471645 data_alloc: 234881024 data_used: 14168064
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84b1000/0x0/0x4ffc00000, data 0x2ced2bc/0x2dab000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:20.809453+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:21.809563+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84b1000/0x0/0x4ffc00000, data 0x2ced2bc/0x2dab000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:22.809663+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:23.809793+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:24.809932+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 26034176 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471645 data_alloc: 234881024 data_used: 14168064
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:25.810067+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 26034176 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:26.810184+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 26034176 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84b1000/0x0/0x4ffc00000, data 0x2ced2bc/0x2dab000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:27.810348+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.588661194s of 15.604912758s, submitted: 23
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 23617536 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:28.810451+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 22052864 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:29.810601+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1557595 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:30.810726+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:31.810840+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:32.810938+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79f9000/0x0/0x4ffc00000, data 0x37a52bc/0x3863000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:33.811046+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 23502848 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79f9000/0x0/0x4ffc00000, data 0x37a52bc/0x3863000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:34.811188+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558307 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:35.811313+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:36.811426+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:37.811579+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79d8000/0x0/0x4ffc00000, data 0x37c62bc/0x3884000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:38.811710+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 23207936 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:39.811800+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 23207936 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558307 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.588848114s of 12.664453506s, submitted: 125
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:40.811928+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:41.812081+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:42.812199+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:43.812312+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cd000/0x0/0x4ffc00000, data 0x37d12bc/0x388f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:44.812410+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558307 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:45.812528+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:46.812678+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:47.812875+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:48.813046+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:49.813210+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558227 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:50.813373+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:51.813497+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:52.813612+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:53.813723+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:54.813888+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558227 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:55.814041+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:56.814179+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:57.814302+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 23142400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:58.814467+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 23142400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:59.814719+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.197729111s of 19.202938080s, submitted: 5
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 23126016 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:00.814849+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 23117824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:01.814970+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 23117824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:02.815103+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 23117824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:03.815234+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:04.815372+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:05.815538+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:06.815664+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:07.815816+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:08.816028+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:09.816170+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:10.816304+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:11.816459+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:12.816643+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:13.816842+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:14.816988+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:15.817156+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9db554a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cf7b4a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:16.817271+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.945161819s of 16.949874878s, submitted: 5
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dc1e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:17.817377+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8797000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:18.817499+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:19.817608+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422534 data_alloc: 234881024 data_used: 11665408
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:20.817737+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:21.817880+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8797000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9aaf41e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d208b40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d644960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:22.817983+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:23.818119+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:24.818252+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202591 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:25.818391+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:26.818537+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:27.818682+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:28.818841+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:29.818987+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202591 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:30.819114+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:31.819245+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:32.819399+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:33.819503+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:34.819641+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202591 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:35.819800+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:36.819951+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:37.820091+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:38.820231+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.212566376s of 22.240785599s, submitted: 46
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9db56960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d645e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d20c3c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9d644d20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db17c20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9239000/0x0/0x4ffc00000, data 0x1f69269/0x2023000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:39.820374+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279011 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:40.820531+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9cf7a3c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d5dc960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:41.820701+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac4000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9dac5a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:42.820851+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:43.820999+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 31350784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:44.821097+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 29229056 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337825 data_alloc: 218103808 data_used: 8757248
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9238000/0x0/0x4ffc00000, data 0x1f69279/0x2024000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:45.821238+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 29138944 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:46.821574+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 29138944 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cf7af00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b8892c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:47.821672+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d20d860
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:48.821807+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:49.821944+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210448 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:50.822076+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:51.822335+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:52.822447+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:53.822604+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:54.822747+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210448 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:55.822903+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:56.823072+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:57.823228+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.394531250s of 19.422958374s, submitted: 34
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d630d20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4eec00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9dac5e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db55a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d1d0d20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d645860
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:58.823344+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:59.823464+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288735 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:00.823629+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dda40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ef000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9d5ddc20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:01.823807+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9247000/0x0/0x4ffc00000, data 0x1f5b269/0x2015000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db16f00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9cf7a000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:02.823941+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9247000/0x0/0x4ffc00000, data 0x1f5b269/0x2015000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:03.824069+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 28672000 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:04.824190+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9a89d4a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9b906b40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1f5b279/0x2016000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 28672000 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356453 data_alloc: 234881024 data_used: 10121216
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4efc00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4efc00 session 0x560c9d20f860
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:05.824308+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:06.824419+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:07.824558+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:08.824719+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:09.824899+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219082 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:10.825099+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:11.825288+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:12.825420+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:13.825819+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:14.825977+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219082 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:15.826144+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:16.826275+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:17.826409+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:18.826555+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:19.826663+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219082 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:20.826789+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:21.826963+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cece000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d20fe00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d20e1e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:22.827078+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9b88a1e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9dab3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.325824738s of 24.373125076s, submitted: 57
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab3000 session 0x560c9b88b0e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9b88ad20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d814000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d815e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9aaf41e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 36405248 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:23.827191+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 36405248 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:24.827316+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3c00 session 0x560c9d645e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 36405248 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318042 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9dac4000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:25.827431+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x22e2269/0x239c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d5dc960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9b8892c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 36397056 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:26.827561+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a2800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 36397056 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:27.827721+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:28.827854+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:29.827988+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401186 data_alloc: 234881024 data_used: 12648448
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:30.828132+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x22e2269/0x239c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:31.828267+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:32.828455+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:33.828611+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:34.828733+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x22e2269/0x239c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401186 data_alloc: 234881024 data_used: 12648448
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:35.828920+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:36.829090+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 33054720 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.557071686s of 14.579683304s, submitted: 17
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:37.829205+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 26443776 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:38.829350+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 25993216 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:39.829475+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8595000/0x0/0x4ffc00000, data 0x2c04269/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 25993216 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488802 data_alloc: 234881024 data_used: 13692928
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:40.829617+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 25903104 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:41.829755+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 25903104 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:42.829885+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8595000/0x0/0x4ffc00000, data 0x2c04269/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 25870336 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:43.830055+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:44.830198+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483322 data_alloc: 234881024 data_used: 13692928
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:45.830407+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:46.830597+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:47.830784+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:48.830964+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.212394714s of 11.282471657s, submitted: 85
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f859b000/0x0/0x4ffc00000, data 0x2c07269/0x2cc1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac5e00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9b906960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d2090e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:49.831138+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:50.831338+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:51.831502+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:52.831741+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:53.831981+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:54.832170+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:55.832648+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:56.832837+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:57.833044+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:58.833248+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:59.833459+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:00.833635+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:01.833830+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:02.833996+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:03.834194+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:04.834352+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:05.834537+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a2800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.065891266s of 17.081371307s, submitted: 23
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9d5b7680
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d208000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9b889a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d738f00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9aaf41e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:06.834760+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:07.834917+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x20262cb/0x20e1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:08.835077+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:09.835227+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a2800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9dac52c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315745 data_alloc: 218103808 data_used: 290816
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:10.835390+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:11.835555+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 31391744 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:12.835718+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x20262cb/0x20e1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:13.835832+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:14.835951+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385209 data_alloc: 234881024 data_used: 10616832
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:15.836080+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:16.836202+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:17.836302+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x20262cb/0x20e1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.787096977s of 12.822224617s, submitted: 36
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dae7860
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9dac43c0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:18.836398+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d2ddc20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d20d4a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9db545a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115589120 unmapped: 30769152 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:19.836498+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115589120 unmapped: 30769152 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429577 data_alloc: 234881024 data_used: 10629120
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:20.836591+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9aa9e400
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cece000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 24887296 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:21.836727+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a2800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9cecfe00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f7fb7000/0x0/0x4ffc00000, data 0x31ea2cb/0x32a5000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dd0e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dd4a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 23437312 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:22.836847+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 23429120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:23.836955+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126525440 unmapped: 19832832 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:24.837088+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126631936 unmapped: 19726336 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1565980 data_alloc: 234881024 data_used: 16048128
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:25.837228+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:26.837378+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f7fb6000/0x0/0x4ffc00000, data 0x31ea2db/0x32a6000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:27.837510+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:28.837622+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:29.837759+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1565204 data_alloc: 234881024 data_used: 16052224
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:30.837854+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:31.837956+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:32.838057+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f7f95000/0x0/0x4ffc00000, data 0x320b2db/0x32c7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.480135918s of 14.569817543s, submitted: 128
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127311872 unmapped: 19046400 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:33.838257+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:34.838388+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616698 data_alloc: 234881024 data_used: 16429056
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:35.838524+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:36.838681+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79f7000/0x0/0x4ffc00000, data 0x37a12db/0x385d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:37.838818+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 18546688 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:38.838916+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 18546688 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:39.839040+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 18399232 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612226 data_alloc: 234881024 data_used: 16429056
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:40.839140+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79e0000/0x0/0x4ffc00000, data 0x37c02db/0x387c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 18399232 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:41.839240+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:42.839381+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:43.839484+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79dd000/0x0/0x4ffc00000, data 0x37c32db/0x387f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:44.839621+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612618 data_alloc: 234881024 data_used: 16429056
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:45.839788+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79dd000/0x0/0x4ffc00000, data 0x37c32db/0x387f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.750670433s of 12.807613373s, submitted: 72
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 128081920 unmapped: 18276352 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:46.839949+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 128081920 unmapped: 18276352 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:47.840056+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 128081920 unmapped: 18276352 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:48.840188+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d5dd680
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9d6441e0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cf000/0x0/0x4ffc00000, data 0x37d12db/0x388d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab01000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9b88b860
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:49.840319+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486899 data_alloc: 234881024 data_used: 10125312
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:50.840444+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:51.840573+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:52.840669+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b474960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9a89cd20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d20c000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:53.840741+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:54.840982+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:55.841182+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:56.841325+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:57.841447+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:58.841556+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:59.841662+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:00.841723+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:01.841824+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:02.841918+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:03.842034+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:04.842167+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:05.842309+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:06.842445+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:07.842549+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:08.842683+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:09.842832+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:10.842940+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.960874557s of 25.003011703s, submitted: 65
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9db57c20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab01000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9b888f00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a3000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9aaf45a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9cecfa40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9dac45a0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:11.843080+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29941760 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:12.843196+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29941760 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab01000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9b474d20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18ec269/0x19a6000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9cd65a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:13.843296+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29941760 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9e4ee800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac5c20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d1d1a40
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9ab01000
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:14.843411+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 29630464 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:15.843546+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306517 data_alloc: 218103808 data_used: 3018752
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:16.843681+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:17.843811+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:18.843936+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1910279/0x19cb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1910279/0x19cb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:19.844727+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:20.845005+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306517 data_alloc: 218103808 data_used: 3018752
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1910279/0x19cb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:21.845098+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:22.845194+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:23.845305+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.842787743s of 12.852742195s, submitted: 11
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:24.845401+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8e2a000/0x0/0x4ffc00000, data 0x236b279/0x2426000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 123723776 unmapped: 22634496 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:25.845507+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390535 data_alloc: 218103808 data_used: 3342336
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:26.845612+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:27.845721+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:28.845862+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:29.845971+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8e07000/0x0/0x4ffc00000, data 0x2381279/0x243c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:30.846089+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390551 data_alloc: 218103808 data_used: 3342336
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:31.846197+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:32.846306+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8e07000/0x0/0x4ffc00000, data 0x2381279/0x243c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:33.846440+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:34.846606+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9d5dcd20
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9ab54780
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9d58c800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.491474152s of 11.555577278s, submitted: 110
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9b888960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:35.846819+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:36.846937+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:37.847063+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:38.847199+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:39.847330+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:40.847448+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:41.847561+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:42.847714+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:43.847863+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:44.848000+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:45.848169+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:46.848308+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:47.848421+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:48.848554+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:49.848667+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:50.848808+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:51.848945+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:52.849046+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:53.849188+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:54.849346+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:55.849493+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:56.849632+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:57.849726+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:58.849862+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:59.849993+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:00.850122+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:01.850232+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:02.850340+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:03.850448+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:04.850573+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:05.850718+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:06.851789+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:07.851893+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:08.852030+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:09.852163+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:10.852305+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:11.852533+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:12.852664+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:13.852835+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:14.852976+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:15.853137+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:16.853251+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:17.853348+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:18.853503+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:19.853642+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:20.853759+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1800.0 total, 600.0 interval
                                          Cumulative writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                          Cumulative WAL: 12K writes, 3773 syncs, 3.36 writes per sync, written: 0.03 GB, 0.02 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 3402 writes, 12K keys, 3402 commit groups, 1.0 writes per commit group, ingest: 13.97 MB, 0.02 MB/s
                                          Interval WAL: 3402 writes, 1492 syncs, 2.28 writes per sync, written: 0.01 GB, 0.02 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:21.853886+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:22.853994+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:23.854132+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:24.854245+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:25.854379+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:26.854492+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:27.854596+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:28.854708+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:29.854855+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:30.855010+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:31.855117+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:32.855221+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'config diff' '{prefix=config diff}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:33.855320+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'config show' '{prefix=config show}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'counter dump' '{prefix=counter dump}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 29810688 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'counter schema' '{prefix=counter schema}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:34.855422+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 29843456 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:35.855549+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'log dump' '{prefix=log dump}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127524864 unmapped: 29876224 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'perf dump' '{prefix=perf dump}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'perf schema' '{prefix=perf schema}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:36.855653+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:37.855732+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:38.855822+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:39.855923+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:40.856034+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:41.856126+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:42.856235+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:43.856330+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:44.856424+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:45.856545+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:46.856669+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:47.856753+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:48.856895+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:49.856992+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 40976384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:50.857092+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:51.857219+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:52.857338+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:53.857440+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:54.857546+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:55.857676+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:56.857793+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:57.857914+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:58.858013+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:59.858122+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:00.858255+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:01.858339+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:02.858423+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:03.858590+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 40968192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:04.858674+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:05.858784+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:06.858872+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:07.858955+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:08.859048+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:09.859146+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:10.859267+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:11.859360+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:12.859462+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:13.859550+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:14.859635+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:15.859714+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:16.859798+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:17.859889+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:18.859975+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:19.860059+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:20.860146+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:21.860249+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:22.860356+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:23.860458+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:24.860571+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:25.860733+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:26.860835+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:27.860949+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:28.861069+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:29.861180+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:30.861292+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:31.861413+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 40960000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:32.861554+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:33.861649+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:34.861780+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:35.861937+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:36.862070+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:37.862200+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:38.862299+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:39.862434+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:40.862584+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:41.862714+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:42.862824+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:43.862916+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:44.863046+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116449280 unmapped: 40951808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 130.150497437s of 130.159408569s, submitted: 17
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:45.863217+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 40902656 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:46.863319+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 42262528 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:47.863478+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 42262528 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:48.863616+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 42262528 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:49.863734+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 42262528 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:50.863865+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 42262528 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:51.863990+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115138560 unmapped: 42262528 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:52.864119+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 42254336 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:53.864219+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115154944 unmapped: 42246144 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:54.864337+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115154944 unmapped: 42246144 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:55.864491+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 42237952 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:56.864580+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 42237952 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:57.864726+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 42237952 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:58.864845+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 42237952 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:59.864990+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 42237952 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:00.865116+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115163136 unmapped: 42237952 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:01.865240+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 42229760 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:02.865643+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 42229760 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:03.865772+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 42229760 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:04.865907+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 42229760 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:05.866057+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 42229760 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:06.866182+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 42229760 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:07.866310+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 42229760 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:08.866459+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115171328 unmapped: 42229760 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:09.866598+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 42221568 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:10.866749+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 42221568 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:11.866909+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 42221568 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:12.867056+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 42221568 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:13.867192+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 42221568 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:14.867338+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 42221568 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:15.867490+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115179520 unmapped: 42221568 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:16.867620+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 42213376 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:17.867772+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 42213376 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:18.867915+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 42213376 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:19.868051+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 42213376 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: mgrc ms_handle_reset ms_handle_reset con 0x560c9a079c00
Oct 09 10:10:50 compute-1 ceph-osd[7514]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3631142817
Oct 09 10:10:50 compute-1 ceph-osd[7514]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3631142817,v1:192.168.122.100:6801/3631142817]
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: get_auth_request con 0x560c9dab3000 auth_method 0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: mgrc handle_mgr_configure stats_period=5
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:20.868195+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9acc9400 session 0x560c9cd64960
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9a807800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ade0c00 session 0x560c9d291680
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: handle_auth_request added challenge on 0x560c9c8a0800
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:21.868322+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:22.868440+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:23.868535+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:24.868677+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:25.868850+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:26.868996+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:27.869103+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:28.869249+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:29.869409+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:30.869575+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:31.869736+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:32.870087+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 42016768 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:33.870235+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:34.870394+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:35.870588+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:36.870741+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:37.870896+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:38.871048+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:39.871177+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:40.871283+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:41.871414+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:42.871521+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:43.871623+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:44.871777+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:45.871942+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:46.872081+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:47.872205+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:48.872345+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:49.872475+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:50.872574+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:51.872732+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:52.872863+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:53.872965+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:54.873098+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:55.873227+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:56.873344+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:57.873450+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:58.873599+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:59.873752+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:00.873871+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:01.874017+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:02.874253+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:03.874392+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:04.874566+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:05.874749+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:06.874925+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:07.875097+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:08.875222+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:09.875365+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:10.875499+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:11.875630+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:12.875778+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:13.875881+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:14.876088+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:15.876288+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:16.876448+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:17.876587+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:18.876736+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:19.876870+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:20.877019+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:21.877156+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:22.877276+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:23.877377+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:24.877510+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:25.877718+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:26.877879+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:27.878031+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:28.878160+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:29.878277+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:30.878421+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:31.878557+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:32.878749+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:33.878906+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:34.879037+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:35.879198+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:36.879334+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:37.879477+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:38.879614+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:39.879777+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:40.879943+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:41.880115+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:42.880227+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:43.880362+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 42008576 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:44.880481+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:45.880634+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:46.880780+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:47.880927+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:48.881088+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:49.881264+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:50.881397+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:51.881530+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:52.881662+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:53.881789+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:54.881965+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:55.882168+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:56.882311+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:57.882452+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:58.882582+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:59.882736+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:00.883031+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:01.883162+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:02.883274+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:03.883775+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:04.883904+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:05.884026+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:06.884123+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:07.884228+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:08.884323+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115400704 unmapped: 42000384 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:09.884413+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:10.884549+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:11.884683+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:12.884837+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:13.884970+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:14.885104+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:15.885258+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:16.885402+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:17.885531+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:18.885682+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:19.885836+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:20.885967+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:21.886096+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:22.886242+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:23.886373+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:24.886507+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:25.886683+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:26.886838+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:27.887001+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:28.887152+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:29.887321+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:30.887435+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115408896 unmapped: 41992192 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:31.887573+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:32.887733+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:33.887832+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:34.887942+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:35.888074+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:36.888199+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:37.888335+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:38.888471+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:39.888642+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:40.888941+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:41.889077+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:42.889211+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:43.889311+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:44.889445+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:45.889593+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:46.889720+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:47.889851+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:48.889981+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:49.890146+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:50.890307+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:51.890449+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:50.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:52.890606+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115417088 unmapped: 41984000 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:53.890727+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:54.890861+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:55.891074+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:56.891191+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:57.891287+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:58.891425+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:59.891544+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:00.891672+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:01.891778+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:02.891882+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:03.891983+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:04.892116+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115425280 unmapped: 41975808 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:05.892258+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:06.892386+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:07.892505+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:08.892634+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:09.892760+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:10.892864+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:11.892995+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:12.893100+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:13.893271+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:14.893366+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:15.893490+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:16.893655+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:17.893771+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:18.893864+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:19.893990+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:20.894117+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 41967616 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:21.894249+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:22.894810+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:23.894927+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:24.895045+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:25.895177+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:26.895299+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:27.895455+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:28.895576+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:29.895728+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:30.895828+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:31.895952+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:32.896108+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:33.896262+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:34.896374+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:35.896490+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:36.896601+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 41951232 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:37.896728+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 41943040 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:38.896814+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 41943040 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:39.896903+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 41943040 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:40.897028+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 41943040 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:41.897127+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 41943040 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:42.897236+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115458048 unmapped: 41943040 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:43.897379+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 41934848 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:44.897534+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 41934848 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:45.897712+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 41934848 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:46.898008+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 41934848 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:47.898149+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 41934848 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:48.898310+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:49.898498+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115466240 unmapped: 41934848 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:50.898743+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 41926656 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:51.898927+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 41926656 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:52.899096+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 41926656 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:53.899299+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 41926656 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:54.899484+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 41926656 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:55.899711+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 41926656 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:56.899920+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 41918464 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:57.900138+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 41918464 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:58.900355+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 41918464 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:59.900546+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 41918464 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:00.900731+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 41918464 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:01.900900+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115482624 unmapped: 41918464 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:02.901112+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 41910272 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:03.901326+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 41910272 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:04.901593+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 41910272 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:05.901929+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 41910272 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:06.902143+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 41910272 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:07.902368+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 41910272 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:08.902545+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 41910272 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:09.902747+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115490816 unmapped: 41910272 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:10.902898+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 41902080 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:11.903051+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 41902080 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:12.903217+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 41902080 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:13.903382+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 41902080 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:14.903517+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 41902080 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:15.903662+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 41902080 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:16.903793+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 41902080 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:17.903925+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115499008 unmapped: 41902080 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'config diff' '{prefix=config diff}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'config show' '{prefix=config show}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'counter dump' '{prefix=counter dump}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:50 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:18.904020+0000)
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'counter schema' '{prefix=counter schema}'
Oct 09 10:10:50 compute-1 ceph-osd[7514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 09 10:10:51 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115343360 unmapped: 42057728 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:51 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x3d8f9c1), peers [1,2] op hist [])
Oct 09 10:10:51 compute-1 ceph-osd[7514]: monclient: tick
Oct 09 10:10:51 compute-1 ceph-osd[7514]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-1 ceph-osd[7514]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:19.904111+0000)
Oct 09 10:10:51 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115302400 unmapped: 42098688 heap: 157401088 old mem: 2845415833 new mem: 2845415833
Oct 09 10:10:51 compute-1 ceph-osd[7514]: do_command 'log dump' '{prefix=log dump}'
Oct 09 10:10:51 compute-1 rsyslogd[1241]: imjournal from <compute-1:ceph-osd>: begin to drop messages due to rate-limiting
Oct 09 10:10:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 09 10:10:51 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3301407202' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.19014 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.28871 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.28627 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.19038 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.28895 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.28901 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.19062 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.28928 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/319661387' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/563879597' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2786971522' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1031980098' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/852438262' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1824377817' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3301407202' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3495633622' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 09 10:10:51 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2806679365' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 09 10:10:51 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1361745880' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:10:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:51.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:52 compute-1 crontab[186538]: (root) LIST (root)
Oct 09 10:10:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 09 10:10:52 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/530546311' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.28666 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.19095 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.28949 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.28952 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.19122 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.19128 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.28732 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2806679365' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1361745880' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1746788196' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/544842412' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:10:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 09 10:10:52 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1702982534' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:10:52 compute-1 nova_compute[162974]: 2025-10-09 10:10:52.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:52.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 09 10:10:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2816554851' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 09 10:10:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2768663498' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.19152 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.28744 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.28994 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.19179 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.29015 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.19188 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.19200 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.28789 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.28798 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/530546311' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3519033552' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1702982534' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2369584623' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2677920078' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2283778781' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/26126343' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2816554851' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2768663498' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 09 10:10:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3631243834' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 09 10:10:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2628658671' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:10:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:10:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:53.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:10:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 09 10:10:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3683853823' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 09 10:10:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1463422601' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:10:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 09 10:10:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2091756922' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 09 10:10:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3824702024' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 09 10:10:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3185679173' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 09 10:10:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2029986551' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.29066 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.19230 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.28828 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.29090 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.19257 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.28843 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.28858 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.19284 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.28879 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3179625250' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3631243834' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2309428589' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2628658671' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/651014448' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3683853823' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3789920477' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1463422601' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4174094474' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2091756922' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1542477114' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1922145410' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3824702024' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3185679173' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3029160739' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2029986551' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3127340658' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 09 10:10:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/945918695' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:10:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 09 10:10:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2211374146' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-1 sudo[186940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:10:54 compute-1 sudo[186940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:54 compute-1 sudo[186940]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 09 10:10:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2098841684' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:10:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:54.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 09 10:10:55 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1069066591' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 09 10:10:55 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1671176700' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 09 10:10:55 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/114702688' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 systemd[1]: Starting Hostname Service...
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/423818157' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/487013521' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/945918695' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2211374146' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1111936001' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3952302202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1319811705' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2098841684' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3615703962' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1935202965' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1069066591' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2818391916' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1215003925' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1671176700' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2133360876' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/114702688' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/955270682' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:10:55 compute-1 systemd[1]: Started Hostname Service.
Oct 09 10:10:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 09 10:10:55 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/754805668' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:55.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:55 compute-1 nova_compute[162974]: 2025-10-09 10:10:55.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 09 10:10:55 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2583758200' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:56 compute-1 ceph-mon[9795]: pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:56 compute-1 ceph-mon[9795]: from='client.29261 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3828484366' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/754805668' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2583758200' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/292489141' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1279277176' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2320038838' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:10:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 09 10:10:56 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3089908545' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:10:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:56.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 09 10:10:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/467309719' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 09 10:10:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2449000458' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.29288 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.29050 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.29062 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.29327 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.29324 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.19476 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.29345 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.29351 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.19500 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/3089908545' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/163346850' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1002807148' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/467309719' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1326728277' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3778750636' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2449000458' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 09 10:10:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4089750137' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:57.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:57 compute-1 nova_compute[162974]: 2025-10-09 10:10:57.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 09 10:10:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/689574017' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29366 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29378 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.19527 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29402 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29408 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.19545 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29134 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29438 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29450 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29158 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4089750137' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29468 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1228437802' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.19608 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29194 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1307369467' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29501 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/689574017' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/893082231' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29215 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29221 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='client.29540 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 09 10:10:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/371431490' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:10:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:58.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:59 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 09 10:10:59 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1989934966' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 09 10:10:59 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/642372660' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='client.19677 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3962151522' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/371431490' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='client.29281 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3666300404' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/1989934966' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/4210533807' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='client.19728 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:59 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/642372660' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:10:59 compute-1 podman[187738]: 2025-10-09 10:10:59.565512317 +0000 UTC m=+0.075452409 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:10:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct 09 10:10:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:59.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:11:00 compute-1 nova_compute[162974]: 2025-10-09 10:11:00.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:11:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 09 10:11:00 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2134490318' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:11:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/3312017814' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:11:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3882489346' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:11:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/327615863' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:11:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/577281463' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:11:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2780871879' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:11:00 compute-1 ceph-mon[9795]: from='client.29627 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:11:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2590067367' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:11:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/764128313' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:11:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2134490318' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
